I will add some code for benchmarking as soon as possible.

On Fri, Jan 20, 2012 at 7:36 PM, Antonio Valentino wrote:
Hi Francesc, hi Ümit,

On 20/01/2012 15:16, Francesc Alted wrote:
2012/1/20 Ümit Seren wrote:
So I played around a little bit further.
I tried to use my new solution (using table.append instead of
row.append) with the following settings in tables.openFile():

METADATA_CACHE_SIZE=2*1024*1024
NODE_CACHE_SLOTS=1024

I saw the same performance problems as in the previous code. So it
slows down after …
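
A minimal sketch of that setup, assuming the PyTables 2.x API (the
file name, node path, and row dtype below are illustrative):

import numpy as np
import tables

# Keyword arguments to openFile() override the defaults from
# tables/parameters.py for this file only.
h5f = tables.openFile('results.h5', mode='a',
                      METADATA_CACHE_SIZE=2*1024*1024,
                      NODE_CACHE_SLOTS=1024)

# Hypothetical destination table; the real node path is not shown
# in the thread.
table = h5f.getNode('/group_01/result_0')

# Build a whole block of rows and append it in one call instead of
# filling table.row one row at a time.  The dtype must match the
# table's description.
block = np.zeros(1000, dtype=[('id', 'i4'), ('value', 'f8')])
table.append(block)
table.flush()

h5f.close()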

2012/1/18 Ümit Seren wrote:
Hi Francesc,
I will try to get some numbers as soon as I have some time at hand.
However, I am not sure if I can come up with an absolute number.
It seems that at the beginning (first 1000 tables) I see no
performance penalty; after that, however, the performance quickly
degrades. Does traversing/accessing …
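
One way to get those numbers would be a batch-timing loop like the
sketch below (the file name is made up; walkNodes() visits every
Table in the file):

import time
import tables

h5f = tables.openFile('results.h5', mode='r')

# Report the time spent on each batch of 1000 tables to see where
# the slowdown starts.
count = 0
t0 = time.time()
for table in h5f.walkNodes('/', classname='Table'):
    table.read()            # substitute the real per-table access here
    count += 1
    if count % 1000 == 0:
        t1 = time.time()
        print '%6d tables: %.2f s for the last 1000' % (count, t1 - t0)
        t0 = t1

h5f.close()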

2012/1/17 Anthony Scopatz wrote:

On Tue, Jan 17, 2012 at 4:35 AM, Ümit Seren wrote:
@Anthony:
Thanks for the quick reply.
I fixed my problem (I will get to it later), but first to my previous
problem: I actually made a mistake in my previous mail.
My setup is the following: I have around 29 groups. In each of these
groups I have 5 result tables.
Each of these tables contains …
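
For reference, a layout like the one described could be created with
something like this (schema and names are made up; the thread does
not show the real ones):

import tables

# Illustrative schema; the actual column layout is not given in the
# thread.
class Result(tables.IsDescription):
    id    = tables.Int32Col()
    score = tables.Float64Col()

h5f = tables.openFile('layout.h5', mode='w')
for g in range(29):
    group = h5f.createGroup('/', 'group_%02d' % g)
    for t in range(5):
        h5f.createTable(group, 'result_%d' % t, Result)
h5f.close()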

On Mon, Jan 16, 2012 at 12:43 PM, Ümit Seren wrote:
I created an HDF5 file with PyTables which contains around 29,000
tables with around 31k rows each.
I am trying to create a caching table in the same HDF5 file which
contains a subset of those 29,000 tables.
I wrote a script which basically iterates through each of the 29,000
tables, retrieves a subset …
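
That script presumably looks something like the following sketch
(file name, node name, and the selection condition are illustrative,
assuming the PyTables 2.x API):

import tables

h5f = tables.openFile('results.h5', mode='a')

# Destination caching table; assumed to exist already with a
# description compatible with the source tables.
cache = h5f.getNode('/', 'cache_table')

# Visit every Table in the file, pull out a subset of rows, and
# append it to the caching table as one block.
for table in h5f.walkNodes('/', classname='Table'):
    if table is cache:
        continue
    subset = table.readWhere('score > 0.5')  # illustrative condition
    if len(subset) > 0:
        cache.append(subset)

cache.flush()
h5f.close()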