necessarily lead to better
> performance. Have you tried changing these values to something smaller and
> seeing the effects?
>
> Cheers,
>
> Murtadha
>
> *From: *Rana Alotaibi <ralot...@eng.ucsd.edu>
> *Date: *Monday, 29 January 2018 at 5:21 AM
the
> buffer cache budget. That should give you more than enough memory to
> execute on 39 cores.
>
> Cheers,
> Murtadha
>
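[Editor's note: Murtadha's suggestion above concerns the buffer cache budget, which in AsterixDB is set in the cluster configuration file. A minimal sketch, assuming the `[common]` section of the node config is used; the 16GB value is purely illustrative, and the parameter name should be checked against the docs for your AsterixDB version:

```
[common]
; Buffer cache used for reading datasets from disk (value is illustrative only)
storage.buffercache.size = 16GB
```

Raising this budget favors keeping the base data cache-resident, which can matter more than per-operator sort/join memory for scan-heavy joins.]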
> On 01/29/2018, 3:30 AM, "Mike Carey" <dtab...@gmail.com> wrote:
>
> + dev
>
>
> On 1/28/18 3:37 PM, Rana Alotaibi wrote:
Hi all,
I would like to make AsterixDB utilize all available CPU cores (39) that I
have for the following query:
USE mimiciii;
SET `compiler.parallelism` "39";
SET `compiler.sortmemory` "128MB";
SET `compiler.joinmemory` "265MB";
SELECT P.SUBJECT_ID
FROM LABITEMS I, PATIENTS P, P.ADMISSIONS
Rana
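[Editor's note: the SET statements above raise the per-operator budgets while also asking for 39-way parallelism, and those settings compound. A back-of-the-envelope sketch, assuming for illustration that each parallel partition receives its own sort and join budget (an assumption, not documented AsterixDB accounting; check the docs for how these budgets are actually applied):

```python
# Rough upper bound on the memory one query could claim if every parallel
# partition received its own sort and join budget (an assumption used here
# for illustration, not documented AsterixDB behavior).
parallelism = 39   # compiler.parallelism
sort_mb = 128      # compiler.sortmemory
join_mb = 265      # compiler.joinmemory

total_mb = parallelism * (sort_mb + join_mb)
print(f"worst-case query memory: {total_mb} MB (~{total_mb / 1024:.1f} GB)")
```

Under that assumption the query could ask for roughly 15 GB of working memory, which helps explain the advice elsewhere in this thread to try smaller values: oversized per-operator budgets can crowd out the buffer cache.]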
On Thu, Jan 25, 2018 at 11:24 PM, Rana Alotaibi <ralot...@eng.ucsd.edu>
wrote:
> Hi Chen,
>
> *How did you import data into the dataset? using "load" or "feed"?*
> I used "LOAD" (i.e., USE mimiciii; LOAD DATASET PATIENTS USING localfs
> (("
om our developers.
>
> Just to clarify two things: how did you import data into the dataset, using
> "load" or "feed"? And which version of AsterixDB are you using? Anyway, in
> your case it seems the join takes a lot of time, and your data is pretty
> much cached.
Hi there,
I have a query that takes ~12.7 minutes on average (excluding the 30-minute
warm-up time), and I would like to make sure that I didn't miss
any performance-tuning parameters (I ran the same query on MongoDB,
and it took ~2 minutes).
The query asks to find all patients that