Re: Ignite as distributed file storage

2018-06-30 Thread Dmitry Pavlov
I definitely support adding this functionality.

As an Ignite user, I develop the MTCGA Bot, a tool that stores test results from
previous TC runs. In addition to test results, it also stores thread dumps
and, sometimes, logs. It would be very convenient and more productive to
store this data in such a file store over Ignite, provided, of course, that
the API is convenient enough.

So it is perfectly OK if such a donation is made to the product.

Sat, Jun 30, 2018 at 17:24, Pavel Kovalenko :

> [quoted original proposal snipped]


data extractor

2018-06-30 Thread Dmitriy Govorukhin
Igniters,

I am working on IGNITE-7644 (export all key-value data from a persisted
partition). It will be a command-line tool for extracting data from an Ignite
partition file without the need to start a node.
The main motivation is to have a lifebuoy in case a file gets damaged for
some reason.

I suggest a simple API with two commands for the first implementation:

-c
--CRC [srcPath] - check CRC for all (or by type) pages in a partition

-e
--extract [srcPath] [outPath] - dump all surviving data from a partition to
another file in raw key/value pair format
(requires a graceful node stop; not necessary once --restore is implemented)

The output file format is shown in the attachment; it does not contain any
internal index, but it is very simple and flexible for future work with raw
key/value data.

Future features:
-u
--upload - load raw key/value pairs back into a node

-s
--status - check the current node file status: whether binary recovery is
needed or not (e.g. the node crashed in the middle of a checkpoint)

-r
--restore - restore binary consistency (finish the checkpoint; the WAL file
is required for recovery)

Let's start a discussion; any comments are welcome.
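To make the --CRC idea above concrete, here is a generic sketch of per-page CRC verification. The page size (4 KB) and layout (CRC32 stored in the first 4 bytes of each page) are assumptions invented for illustration; they are NOT Ignite's actual partition file format.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.zip.CRC32;

// Generic illustration of a per-page CRC check over a partition-like file.
// Assumed layout (not Ignite's real one): 4 KB pages, each page stores its
// own CRC32 of the payload in the first 4 bytes.
public class PageCrcChecker {
    static final int PAGE_SIZE = 4096;

    // CRC32 of the page payload (everything after the stored CRC field).
    static long payloadCrc(byte[] page) {
        CRC32 crc = new CRC32();
        crc.update(page, 4, page.length - 4);
        return crc.getValue();
    }

    // True if the CRC stored in the page header matches the payload CRC.
    static boolean pageIsValid(byte[] page) {
        long stored = ByteBuffer.wrap(page).getInt(0) & 0xFFFFFFFFL;
        return stored == payloadCrc(page);
    }

    // Build a valid page for demonstration purposes.
    static byte[] makePage(byte fill) {
        byte[] page = new byte[PAGE_SIZE];
        Arrays.fill(page, 4, PAGE_SIZE, fill);
        ByteBuffer.wrap(page).putInt(0, (int) payloadCrc(page));
        return page;
    }

    public static void main(String[] args) {
        byte[] page = makePage((byte) 7);
        System.out.println(pageIsValid(page)); // prints "true"
        page[100] ^= 1;                        // simulate corruption
        System.out.println(pageIsValid(page)); // prints "false"
    }
}
```

A real --CRC command would walk the partition file page by page with this kind of check and report the pages (or page types) whose stored checksum does not match.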


Re: Ignite as distributed file storage

2018-06-30 Thread Vladimir Ozerov
Pavel,

Can you provide a competitive analysis against other storage solutions? What
products would we compete with? What would our advantages be against them?

I talked to several folks working on solutions involving video and image
processing. They rarely use any databases or grids, nor do they need
transactions, synchronous backups, etc. Instead, this is more about hardware,
load balancing, and so on. IMO this is completely out of the scope of Ignite.
This is why we need concrete usage scenarios to explain why we need it.

Also, please note that most use cases for full-text search, XML, and JSON do
not need any special storage; they only need new index types. And efficient
CLOBs/BLOBs are a matter of moving those pieces out of BinaryObject. None of
this requires anything radically new in the product.

Sat, Jun 30, 2018 at 20:59, Pavel Kovalenko :

> [quoted text snipped]

Re: Ignite as distributed file storage

2018-06-30 Thread Pavel Kovalenko
Dmitriy,

Yes, I have an approximate design in mind. The main idea is that we already
have a distributed cache for file metadata (our atomic cache), and the data
flow and distribution will be controlled by our AffinityFunction and the
baseline topology. We already have discovery and communication to keep such
local file storages in sync. The file data will be split into large blocks
(64-128 MB), which look very similar to our WAL. Each block can contain
one or more file chunks. The tablespace (segment ids, offsets, etc.) will be
stored in our regular page memory. These are the key ideas for the first
version of such a storage. We already have similar components in our
persistence, so that experience can be reused to develop it.
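The block/chunk bookkeeping described above can be sketched as follows, assuming fixed 64 MB blocks (the proposal mentions 64-128 MB). The `Segment` name and the append-only layout are illustrative assumptions, not an Ignite API.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockMapping {
    // Assumed fixed block size; the proposal mentions 64-128 MB blocks.
    static final long BLOCK_SIZE = 64L * 1024 * 1024;

    // One file chunk inside a block, as the metadata cache might record it.
    static final class Segment {
        final long blockId, offsetInBlock, length;
        Segment(long blockId, long offsetInBlock, long length) {
            this.blockId = blockId;
            this.offsetInBlock = offsetInBlock;
            this.length = length;
        }
    }

    // Split a file appended at global offset 'pos' into per-block segments,
    // so one block can hold many small files and a large file spans blocks.
    static List<Segment> segmentsFor(long pos, long fileLen) {
        List<Segment> out = new ArrayList<>();
        long remaining = fileLen;
        while (remaining > 0) {
            long off = pos % BLOCK_SIZE;
            long len = Math.min(remaining, BLOCK_SIZE - off);
            out.add(new Segment(pos / BLOCK_SIZE, off, len));
            pos += len;
            remaining -= len;
        }
        return out;
    }

    public static void main(String[] args) {
        // A 30-byte file appended 10 bytes before a block boundary spans two blocks.
        for (Segment s : segmentsFor(BLOCK_SIZE - 10, 30))
            System.out.println(s.blockId + " " + s.offsetInBlock + " " + s.length);
        // prints "0 67108854 10" then "1 0 20"
    }
}
```

The metadata cache would then store, per file, the ordered list of such segments, keyed and distributed by the affinity function.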

Denis,

Nothing significant should change at our memory level. It will be a
separate, pluggable component on top of a cache. Most of the functions that
give a performance boost can be delegated to the OS level (memory-mapped
files, DMA, direct writes from socket to disk and vice versa). Ignite and the
file storage can evolve independently of each other.
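The OS-level delegation mentioned above can be illustrated with the JDK's `FileChannel.transferTo`, which lets the kernel move file bytes to another channel (a socket, another file) without a round trip through user-space buffers where the platform supports it. The file names here are demo-only.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Copy src to dst via FileChannel.transferTo: where the OS supports it,
    // the kernel moves the bytes without copying through user space.
    static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0, size = in.size();
            while (pos < size)
                pos += in.transferTo(pos, size - pos, out);
            return pos;
        }
    }

    // Round trip through temp files; true if the copy is byte-identical.
    static boolean roundTrip(byte[] data) {
        try {
            Path src = Files.createTempFile("zc-src", ".bin");
            Path dst = Files.createTempFile("zc-dst", ".bin");
            Files.write(src, data);
            return copy(src, dst) == data.length
                    && java.util.Arrays.equals(Files.readAllBytes(dst), data);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello zero copy".getBytes())); // prints "true"
    }
}
```

The same call works with a socket channel as the target, which is the "direct write from disk to socket" case the message refers to.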

Alexey Stelmak, who has great experience in developing such systems, can
provide more low-level information about how it should look.

Sat, Jun 30, 2018 at 19:40, Dmitriy Setrakyan :

> [quoted text snipped]


Re: Ignite as distributed file storage

2018-06-30 Thread Dmitriy Setrakyan
Pavel, it definitely makes sense. Do you have a design in mind?

D.

On Sat, Jun 30, 2018, 07:24 Pavel Kovalenko  wrote:

> [quoted original proposal snipped]


Re: Ignite as distributed file storage

2018-06-30 Thread Denis Magda
Hello Pavel,

I agree that our users occasionally want to store large entries; I've
received several inquiries from people dealing with audio and video data
sets.

What do you think has to be changed at our memory level so that we can
store such data efficiently?

Denis

On Saturday, June 30, 2018, Pavel Kovalenko  wrote:

> [quoted original proposal snipped]


Ignite as distributed file storage

2018-06-30 Thread Pavel Kovalenko
Igniters,

I would like to start a discussion about designing a new feature, because I
think it's time to start making steps towards it.
I have noticed that some of our users have tried to store large homogeneous
entries (> 1, 10, 100 MB/GB/TB) in our caches, without much success.

The IGFS project can do this, but in my view it has one big disadvantage:
it is in-memory only, so users face a strict limit on the size of their data
and the risk of data loss.

Our durable memory can persist data that doesn't fit in RAM to disk, but its
page structure is not designed to store large pieces of data.

There are many distributed file system projects, such as HDFS, GlusterFS,
etc., but all of them concentrate on implementing a full-grade file protocol
rather than a user-friendly API, which raises the entry barrier for
implementing anything on top of them.
We shouldn't go this way. Our main goal should be to give users an easy and
fast way to use file storage and processing here and now.

If we take HDFS as the functionally closest project, we have one big
advantage over it: we can use our caches as file metadata storage and scale
it without limit, while HDFS is bounded by NameNode capacity and has big
problems keeping a large number of files in the system.

We gained very good experience with persistence while developing our durable
memory, and we can combine it with our experience with services, the binary
protocol, and I/O, and start designing a new IEP.

Use cases and features of the project:
1) Storing XML, JSON, BLOBs, CLOBs, images, videos, text, etc. without
overhead or the possibility of data loss.
2) Easy, pluggable, fast, and distributed file processing, transformation,
and analysis (e.g. an ImageMagick processor for image transformation, a
Lucene index for texts; it's bounded only by your imagination).
3) Scalability out of the box.
4) A user-friendly API and minimal steps to start using this storage in
production.

I repeat: this project is not supposed to be a full-grade distributed file
system with full file protocol support.
It should primarily focus on target users who would like to use it without
complex preparation.

For example, a user can deploy Ignite with such storage plus a web server
with a REST API as an Ignite service, and get a scalable, performant image
server out of the box that can be accessed from any programming language.
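The image-server idea above can be sketched with the JDK's built-in `HttpServer`, with an in-memory map standing in for the Ignite service and distributed storage; all names and paths here are illustrative, not a proposed API.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ImageServerSketch {
    // Stand-in for the Ignite-backed file store.
    static final Map<String, byte[]> store = new ConcurrentHashMap<>();

    // Start a tiny REST endpoint: GET /images/<key> returns the stored bytes.
    static HttpServer start(int port) throws java.io.IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/images/", exchange -> {
            String key = exchange.getRequestURI().getPath()
                    .substring("/images/".length());
            byte[] data = store.get(key);
            if (data == null) {
                exchange.sendResponseHeaders(404, -1);
            } else {
                exchange.sendResponseHeaders(200, data.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(data);
                }
            }
            exchange.close();
        });
        server.start();
        return server;
    }

    // Store one entry, fetch it over HTTP, and verify the round trip.
    static boolean selfTest() {
        try {
            store.put("logo.png", new byte[]{1, 2, 3});
            HttpServer s = start(0); // ephemeral port
            int port = s.getAddress().getPort();
            byte[] got = new java.net.URL(
                    "http://localhost:" + port + "/images/logo.png")
                    .openStream().readAllBytes();
            s.stop(0);
            return java.util.Arrays.equals(got, new byte[]{1, 2, 3});
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(selfTest());
    }
}
```

In the proposed setup the handler would read from the distributed file storage instead of the map, and the server itself would be deployed as an Ignite service on every node.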

As a far target goal, we should focus on storing and processing very large
amounts of data such as movies and streaming, which is a big trend today.

I would like to say special thanks to our community members Alexey Stelmak
and Dmitriy Govorukhin, who significantly helped me put together all the
pieces of this puzzle.

So, I want to hear your opinions about this proposal.


[jira] [Created] (IGNITE-8902) GridDhtTxRemote sometimes not rolled back in one phase commit scenario.

2018-06-30 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-8902:
-

 Summary: GridDhtTxRemote sometimes not rolled back in one phase 
commit scenario.
 Key: IGNITE-8902
 URL: https://issues.apache.org/jira/browse/IGNITE-8902
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov
Assignee: Alexei Scherbakov
 Fix For: 2.6


Near node log:

{noformat}
[2018-06-28 18:37:14,541][WARN ][sys-#77] The transaction was forcibly rolled 
back because a timeout is reached: 
GridNearTxLocal[xid=c8c6b184461--0871-da69--0010, 
xidVersion=GridCacheVersion [topVer=141679209, order=1530218114188, 
nodeOrder=16], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, 
state=MARKED_ROLLBACK, invalidate=false, rollbackOnly=true, 
nodeId=36f1c741-dc02-417a-a27d-fcbc90dd8cf1, timeout=100, duration=101, 
label=null]
{noformat}

{noformat}
[2018-06-28 18:37:14,560][ERROR][pool-356018-thread-1] Timeout (0 sec) is 
exceeded.
org.apache.ignite.transactions.TransactionTimeoutException: Failed to acquire 
lock within provided timeout for transaction [timeout=100, tx=GridDhtTxLocal 
[nearNodeId=36f1c741-dc02-417a-a27d-fcbc90dd8cf1, 
nearFutId=a8563574461-ec96bd57-6a94-4303-8ff5-56eaac137f30, nearMiniId=1, 
nearFinFutId=null, nearFinMiniId=0, nearXidVer=GridCacheVersion 
[topVer=141679209, order=1530218114188, nodeOrder=16], 
super=GridDhtTxLocalAdapter [nearOnOriginatingNode=false, nearNodes=[], 
dhtNodes=[06630e42-1c4d-4011-a388-4ec1dd1824fd], explicitLock=false, 
super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false, 
depEnabled=false, txState=IgniteTxStateImpl 
[activeCacheIds=[117538306,117541069], recovery=false, txMap=[IgniteTxEntry 
[key=KeyCacheObjectImpl [part=779, val=5899, hasValBytes=true], 
cacheId=117541069, txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=779, 
val=5899, hasValBytes=true], cacheId=117541069], val=[op=UPDATE, 
val=org.apache.ignite.scenario.internal.model.SampleObject [idHash=1226505441, 
hash=-1035741988, balance=100051, salary=1, fields=HashMap 
{field19=iiwxvrhxlpwqyixvpiregkuqpxuhtuir, 
field17=dyyxoefmichqvstteqjkbdpgmevifvmt, 
field18=iakcqzxcswsxncvztsotrjrlreuvpnsv, 
field22=wvewstllgkwvcxxujbkqkoihudgkkyve, 
field23=blgtxqcnwmexardyujbibiconowvyxvh, 
field20=mhvicfpnmptjreacgatiyobrmvvloxic, 
field21=bxajcavvwuhjvpugfoqohgulihzdbymr, 
field26=xceztfgnlpfoyciwnvhkorrgfllveocl, 
field27=sxzqvvckcgxgjctmygsibtouuzkfievo, 
field24=lsidfhurdjgjlmkrxyqbrdjzmbcicxie, 
field25=vfnmohbvezajifkqiwqbdqpulnynumfz, 
field28=zcewigkcryznakzsyzqzfdrbhklycjer, 
field29=vkctdybyrmtbitxuuqdlsrilxayorjjd, 
field11=lbwqnwwpwgewyjvlobyqwnvifuiggzio, 
field12=rmxclhojshtijttdjirppbkyudpvunht, 
field1=gvfrrpwkhmiziaortptiytwhviwjcpcr, 
field31=yktxbcjiyqfpaytacoajsiybtqocmezz, 
field0=vcorrbnevfunwssjzckdjlbvkynbogce, 
field10=sawaysrchykcvutlwfvglbvrlxvwlghh, 
field15=udrsigcjfetptnmlcnwjgccdqfmhdabv, 
field16=xjyjehlldwwnpbgjjtzwozqthwoefrin, 
field13=hwooamfugkijverkyqyzfccxvqrqjexx, 
field14=doxxkivwxqdhoozzsvwkkimgswrwoegj, 
field7=sxomkgtpjqyqpkrbxqnuknkmpzzpxuou, 
field6=urnknauwekxtgfbaqmesjwllzokdyktt, 
field9=yqhnowhjfrfueoryqlcvdnaddueliwyr, 
field8=nolotdhjdfyotpcvxnrxshaheofsisnd, 
field3=wijyypzycilbqvjirjkorjfrazfmptrj, 
field2=nvznimfolbszmwiosdpyimlvnbrbmxqx, 
field30=xnvglxqnyseduswirxbmxnwhyxlvptch, 
field5=vxzgcyngwzjpopxascdyltgvxcnckzvv, 
field4=gnweoorjfqsbtbsbeiwronzucyzpjwje}, key=5899]], prevVal=[op=NOOP, 
val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1, 
conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=null, 
filters=[], filtersPassed=false, filtersSet=false, entry=GridDhtCacheEntry 
[rdrs=[], part=779, super=GridDistributedCacheEntry [super=GridCacheMapEntry 
[key=KeyCacheObjectImpl [part=779, val=5899, hasValBytes=true], 
val=org.apache.ignite.scenario.internal.model.SampleObject [idHash=1532725782, 
hash=-640361617, balance=10, salary=1, fields=HashMap 
{field19=iiwxvrhxlpwqyixvpiregkuqpxuhtuir, 
field17=dyyxoefmichqvstteqjkbdpgmevifvmt, 
field18=iakcqzxcswsxncvztsotrjrlreuvpnsv, 
field22=wvewstllgkwvcxxujbkqkoihudgkkyve, 
field23=blgtxqcnwmexardyujbibiconowvyxvh, 
field20=mhvicfpnmptjreacgatiyobrmvvloxic, 
field21=bxajcavvwuhjvpugfoqohgulihzdbymr, 
field26=xceztfgnlpfoyciwnvhkorrgfllveocl, 
field27=sxzqvvckcgxgjctmygsibtouuzkfievo, 
field24=lsidfhurdjgjlmkrxyqbrdjzmbcicxie, 
field25=vfnmohbvezajifkqiwqbdqpulnynumfz, 
field28=zcewigkcryznakzsyzqzfdrbhklycjer, 
field29=vkctdybyrmtbitxuuqdlsrilxayorjjd, 
field11=lbwqnwwpwgewyjvlobyqwnvifuiggzio, 
field12=rmxclhojshtijttdjirppbkyudpvunht, 
field1=gvfrrpwkhmiziaortptiytwhviwjcpcr, 
field31=yktxbcjiyqfpaytacoajsiybtqocmezz, 
field0=vcorrbnevfunwssjzckdjlbvkynbogce, 
field10=sawaysrchykcvutlwfvglbvrlxvwlghh, 
field15=udrsigcjfetptnmlcnwjgccdqfmhdabv, 
field16=xjyjehlldwwnpbgjjtzwozqthwoefrin, 
field13=hwooamfugkijverkyqyzfccxvqrqjexx,