Splendid, Dylan, thanks.

In a typical star schema you have FACT and DIMENSION tables, through which
one uses analytical functions to slice and dice, so to speak.

Does Incorta use similar concepts, but in memory? For example, the Oracle
12c In-Memory option creates columnar views in memory for this purpose and
typically uses bitmap indexes for low-cardinality columns. How different is
that from the way this tool works?
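To make my question concrete, here is a toy sketch in plain Python with
SQLite (hypothetical table names and figures, nothing to do with Incorta's
internals) of the kind of slice-and-dice over a star schema that I mean:

```python
# Toy star schema: a fact table joined to a dimension table, then
# aggregated by a dimension attribute. An in-memory columnar engine
# aims to answer the same question without pre-building this schema.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension table: low-cardinality attributes, the kind of column
# a bitmap index typically suits.
cur.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)"
)
cur.executemany(
    "INSERT INTO dim_product VALUES (?, ?)",
    [(1, "Books"), (2, "Music"), (3, "Books")],
)

# Fact table: one row per sale, linked to the dimension by surrogate key.
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO fact_sales VALUES (?, ?)",
    [(1, 10.0), (2, 5.0), (3, 7.5), (1, 2.5)],
)

# "Slice and dice": join fact to dimension, aggregate by category.
rows = cur.execute(
    """
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
    """
).fetchall()
print(rows)  # [('Books', 20.0), ('Music', 5.0)]
```

The question, in other words, is whether Incorta replaces the join and the
bitmap-index machinery above with something done directly on in-memory
columnar data.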

Many thanks,

Mich



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 12 August 2017 at 20:17, Dylan Wan <dylan....@gmail.com> wrote:

> Yes, it is implemented and has already gone live at several big companies
> in the Bay Area.
>
> Spark Python is used as the language for typical data transformation jobs
> when necessary. It is entirely optional.
>
> The data are stored in an Incorta proprietary format when they are
> presented in memory, ready to serve query requests. When the data are
> stored on disk, they are first prepared in Parquet as a staging area, but
> will be backed up as a memory dump.
>
> Typically a star schema is required for handling the data in a relational
> database. The star schema is a modelling design to speed up query
> performance. You do not need to do that when the data are presented in
> memory. There are better ways than doing joins using (bitmap) indexes,
> surrogate keys, etc. That is Incorta.
>
> Hope this helps.
>
> Dylan
>
> Linkedin profile: https://www.linkedin.com/in/dylanwan/
> Blog : dylanwan.wordpress.com
>
>
>
> On Wed, Aug 9, 2017 at 12:45 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi,
>>
>> There is a tool called Incorta that uses Spark, Parquet and open-source
>> big data analytics libraries.
>> Its aim is to accelerate analytics. It claims to incorporate Direct Data
>> Mapping to deliver near real-time analytics on top of original, intricate
>> transactional data such as ERP systems. Direct Data Mapping executes
>> real-time joins with aggregations. It is designed to eliminate
>> cumbersome, time-consuming ETL routines, dimensional data stores and
>> traditional OLAP semantic layers.
>>
>> So a lot of talk but very little light. It claims that there is no need
>> for a star schema or other DW design schemes. I was wondering whether
>> anyone has come across it?
>>
>>
>> Some stuff here
>>
>> http://www.jenunderwood.com/2017/04/11/accelerating-analytics-incorta-direct-data-mapping/
>>
>>
>> Thanks,
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>>
>>
>
>
>
> --
> Dylan Wan
> Solution Architect - Enterprise Apps
> Email: dylan....@gmail.com
> My Blog: dylanwan.wordpress.com
>
>
