[
https://issues.apache.org/jira/browse/PIG-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13039754#comment-13039754
]
Mostafa Ead commented on PIG-200:
---------------------------------
Hello Daniel,
I am using pigmix2.patch now. It generates 625m records for the pages table,
which is larger than the disk space available on my cluster. I would like to
generate only 100m pages records. Is there a ratio I should maintain between
the size of the pages table and the other tables (users and power users)?
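One possible workaround is to sample the generated pages table down to roughly
100m records (100m / 625m = 0.16); a minimal Pig Latin sketch, assuming the
generated data sits at 'pages' and 'pages_100m' is a hypothetical output path:

pages = LOAD 'pages';            -- generated pages data (path assumed)
sampled = SAMPLE pages 0.16;     -- keep ~16% of rows: 100m of 625m
STORE sampled INTO 'pages_100m'; -- hypothetical output location

SAMPLE is equivalent to a FILTER on a random condition, so the result is only
approximately 100m rows, and it does not preserve any exact ratio against the
users tables, hence the question above.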
Thanks.
Mostafa Ead
> Pig Performance Benchmarks
> --------------------------
>
> Key: PIG-200
> URL: https://issues.apache.org/jira/browse/PIG-200
> Project: Pig
> Issue Type: Task
> Reporter: Amir Youssefi
> Assignee: Alan Gates
> Fix For: 0.2.0
>
> Attachments: generate_data.pl, perf-0.6.patch, perf.hadoop.patch,
> perf.patch, pigmix2.patch
>
>
> To benchmark Pig performance, we need a TPC-H-like large data set plus a
> script collection. These will be used to compare different Pig releases and
> to compare Pig with other systems (e.g. Pig + Hadoop vs. Hadoop only).
> Here is the wiki for small tests: http://wiki.apache.org/pig/PigPerformance
> I am currently running long-running Pig scripts over data sets on the order
> of tens of TBs. The next step is hundreds of TBs.
> We need an open large data set (open-source scripts that generate the data
> set) and detailed scripts for important operations such as ORDER,
> AGGREGATION, etc., as sketched below.
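> For illustration, a short Pig Latin sketch of those two operations (the
> field names and paths here are hypothetical, not a defined benchmark schema):
>
> pages = LOAD 'pages' AS (user:chararray, timestamp:long, estimated_revenue:double);
> by_user = GROUP pages BY user;
> -- AGGREGATION: total estimated revenue per user
> revenue = FOREACH by_user GENERATE group AS user, SUM(pages.estimated_revenue) AS total;
> -- ORDER: rank users by total revenue
> ranked = ORDER revenue BY total DESC;
> STORE ranked INTO 'revenue_by_user';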
> We can call these the Pig Workouts: Cardio (short processing), Marathon
> (long-running scripts), and Triathlon (a mix of both).
> I will update this JIRA with more details of current activities soon.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira