Hi,
Thank you for the explanation.
I have a question: what type of data distribution did you use when you
performed your benchmark, uniform distribution or Zipfian
distribution (the latter is used very often for business applications)?
We use the 2Q algorithm for the read cache, and the benchmark results
depend on the law of data distribution which you use.
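As an aside, the difference between the two workloads is easy to reproduce. Below is a minimal sketch in plain Python, nothing OrientDB-specific; the key-space size, request count, and skew exponent are illustrative assumptions:

```python
import random

N_KEYS = 100_000      # assumed size of the key space
N_REQUESTS = 10_000   # assumed number of benchmark reads
SKEW = 1.0            # assumed Zipf exponent s; higher means more skew

random.seed(42)

# Uniform: every key is equally likely to be requested.
uniform_keys = [random.randrange(N_KEYS) for _ in range(N_REQUESTS)]

# Zipfian: the key of rank r is requested with probability proportional
# to 1 / r**s, so a small "hot set" of keys receives most of the traffic.
weights = [1.0 / (rank ** SKEW) for rank in range(1, N_KEYS + 1)]
zipf_keys = random.choices(range(N_KEYS), weights=weights, k=N_REQUESTS)

# A read cache benefits far more under the Zipfian workload: the share of
# requests covered by the hottest 1% of keys is much larger.
hot = N_KEYS // 100
uniform_hot = sum(k < hot for k in uniform_keys) / N_REQUESTS
zipf_hot = sum(k < hot for k in zipf_keys) / N_REQUESTS
print(f"top-1% keys cover {uniform_hot:.1%} of uniform reads")
print(f"top-1% keys cover {zipf_hot:.1%} of Zipfian reads")
```

Under uniform access the hottest 1% of keys covers roughly 1% of reads, while under the Zipfian workload it covers the majority, which is why the distribution law matters so much for any cache benchmark.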



On Sat, Jan 17, 2015 at 9:21 AM, Mandark13 <[email protected]>
wrote:

>
> Hi Andrey
>
>       The tables in the db contain data ranging from simple key value
> objects to serialized objects. The following is a snapshot of the types of
> tables in the db
>
>
>    *Type 1*. Uniquely indexed key value tables in which I try to persist
> an object whose key field is uniquely indexed.
>      The variables in the object have the following types
>         a) primitives
>         b) Strings
>         c) Serialized fields
>
>   *Type 2*.  Objects which have both Uniquely Indexed fields and Non
> Uniquely Indexed fields
>       The variables in the object have the following types
>         a) primitives
>         b) Strings
>         c) Serialized fields
>
>   *Type 3*. Key Value tables with a unique key and a serialized value.
>
>   The db has around *78 tables*, out of which *19 tables fall under Type
> 2*, *9 tables fall under Type 3*, and the *rest fall under Type 1*.
>
>
>   Every *read* call happens within *one single transaction*.
>
>   The reads involve fetching data from across all these types of tables
> (almost all 78), and the data gets fetched using unique keys and non-unique
> keys, all within a single transaction.
>
>  When the read happens for the very *first time*, fetching data for *50,000
> keys* (a combination of uniquely indexed and non-uniquely indexed keys)
> within one transaction, the total time taken to fetch all the data for the
> 50,000 keys is *361053 ms*.
>
>  Time for fetching data the *second time* for the *same* set of *50,000
> keys* is *276512 ms*.
>
>  Time for fetching data the *third time* for the *same* set of *50,000
> keys* is *57768 ms*.
>
>  Time for fetching data the *fourth time* for the *same* set of *50,000
> keys* is *26626 ms*.
>
>  After some time, say *half an hour*, when I *repeat the same exercise*,
> the *same pattern* is repeated.
>
>  The basic understanding I get when I observe this pattern is that the
> *speed of fetch works on an LRU* basis. (Correct me if I am wrong.)
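[Andrey:] Your reading is right in spirit: the first pass pays for the disk reads, and later passes are served from the cache. The effect can be shown in miniature with plain Python (nothing OrientDB-specific); in the real system the cache is smaller than the working set, so warming is gradual rather than instant as in this toy:

```python
from functools import lru_cache

calls = 0  # counts how often we really hit the "disk"

@lru_cache(maxsize=50_000)
def fetch(key):
    # Stand-in for an expensive disk read.
    global calls
    calls += 1
    return key * 2

keys = list(range(50_000))

for _ in range(4):          # four passes over the same 50,000 keys
    for k in keys:
        fetch(k)

# Only the first pass misses the cache; the next three are pure memory hits.
print(calls)   # 50000, not 200000
```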
>
>  *The application I work on has strict time constraints, and something
> has to be done to improve the read performance.* So my question is:
>
>  *Are there any configurations available to improve the entire fetch
> process? Would increasing DISK_CACHE_BUFFER_SIZE be of any use? Or is
> there anything else I can do?*
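[Andrey:] Assuming you are on OrientDB 2.x, the read (disk) cache size is controlled by the `storage.diskCache.bufferSize` JVM property, whose value is in megabytes. A sketch of the syntax only; the 8192 MB value and the jar name are placeholders, and the cache should be sized to the host's free RAM:

```shell
# Illustrative only: raise the disk cache to 8 GB (value is in megabytes).
# "your-application.jar" is a placeholder for your embedded application.
java -Dstorage.diskCache.bufferSize=8192 -jar your-application.jar
```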
>
>
>
> On Friday, January 9, 2015 at 12:05:53 PM UTC+5:30, Mandark13 wrote:
>
>>
>> Hi,
>>
>>
>>             I am a new Orient DB user. I have created a database which
>> has a combination of Document data and Serialized data. I have a total of
>> 78 tables out of which 17 tables have non unique indices and the remaining
>> have unique indices. The total number of records across all tables is
>> pretty huge and amounts to around 5 crores (50 million). I am finding the
>> fetch response slow the first few times when I try to fetch around 50 K
>> items from across all the tables within a single transaction. When I
>> repeatedly try to fetch the same 50 K items, I observe the fetch time is faster after
>> say the fourth try. I understand this behaviour is because of the LRU
>> implementation. But my concern is: *is there any way I can speed up the
>> fetch the first few times? Are there any configurations that can accelerate it?*
>>
>  --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "OrientDB" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Best regards,
Andrey Lomakin.
