Strangely, I have not seen any email from Mike Segel. I did get a response
from Knut Anders; however, compression doesn't seem to be helping. I have
tried with 10.4, but the initial findings are not very satisfactory. Could
you please let me know your thoughts on the following questions from my
previous email:
QUOTE:
I wanted to get your take on whether Derby can respond in
sub-100-millisecond time with the table sizes you see above.
I find that:
select category_master.category_name,
       count(category_master.category_name) as category_count
from
  (
    select internal.object_id
    from
      (
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1002) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all
        values(1001) union all .......
        values(9999)
      ) as internal(object_id)
  ) as external_ids,
  object_master,
  category_master,
  object_category_mapping
where
  external_ids.object_id = object_master.object_id and
  external_ids.object_id = object_category_mapping.object_id and
  object_master.object_id = object_category_mapping.object_id and
  category_master.category_id = object_category_mapping.category_id
group by
  category_master.category_name
order by
  category_count desc
is much faster. Unfortunately, connection.prepareStatement() is taking far
too much memory (both stack and heap; I have a hard limit of 256 MB for my
JVM), which exceeds my application's resources. Is there a way I can
precompile SQL statements that are very expensive to parse at execution
time?
UNQUOTE:
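
To make the precompilation question concrete, below is the kind of pattern
I have in mind. This is only a minimal sketch: the class name, the
jdbc:derby:myDB URL, and the simplified single-id variant of the query are
placeholders. My understanding is that reusing one PreparedStatement avoids
re-parsing, since Derby caches the compiled plan for identical SQL text:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PreparedOnce {
    public static void main(String[] args) throws SQLException {
        // Single long-lived connection; the URL is a placeholder.
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");

        // Prepare the expensive statement once, at startup. The '?' marker
        // stands in for a single object_id; the real query would need one
        // marker per id, or a scratch table holding the ids.
        PreparedStatement ps = conn.prepareStatement(
              "select category_master.category_name, "
            + "count(category_master.category_name) as category_count "
            + "from object_category_mapping, category_master "
            + "where object_category_mapping.object_id = ? "
            + "and category_master.category_id = "
            + "object_category_mapping.category_id "
            + "group by category_master.category_name "
            + "order by category_count desc");

        // Reuse the same PreparedStatement for every request: only the
        // parameter value changes, so the statement is not re-parsed.
        int[] ids = {1001, 1002, 9999};
        for (int id : ids) {
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getInt(2));
            }
            rs.close();
        }
        ps.close();
        conn.close();
    }
}

As far as I can tell, though, each distinct VALUES list still compiles as a
new statement, which is why I am asking whether the expensive ones can be
precompiled.
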
On Tue, Apr 7, 2009 at 9:36 PM, Bryan Pendleton
<[email protected]> wrote:
> What version of Derby? What operating system? What version of Java?
>> [Arindam] 10.1.3.1; Windows XP; JRE 1.6
>>
>
> Definitely try using Derby 10.4.
>
> Also, Mike Segel made a bunch of other great suggestions in his mail, so
> I suggest following those and seeing where they get you.
>
> thanks,
>
> bryan
>