[ https://issues.apache.org/jira/browse/PHOENIX-2240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14735645#comment-14735645 ]
James Taylor commented on PHOENIX-2240:
---------------------------------------
One easy fix is to just document it. The performance script that generates rows
is fine if it creates *approximately* that many rows; 500 fewer out of 100M is
not going to impact any performance testing. If this affects Pherf (which
supports functional testing too), though, then IMO we should fix it there.
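For context, ~500 collisions out of 100M independently drawn random keys is in
line with what the birthday problem predicts. A minimal sketch (the 10^13
key-space figure and the helper names are assumptions for illustration, not the
actual performance.py code):
{code}
# A minimal sketch (hypothetical, not the actual performance.py logic)
# of why independently drawn random keys collide at this scale.

def expected_duplicates(n_rows, key_space):
    """Approximate expected duplicate count when n_rows keys are drawn
    uniformly at random from key_space possible values. For
    n_rows << key_space this is ~ n_rows**2 / (2 * key_space)."""
    return n_rows ** 2 / (2.0 * key_space)

# ~500 duplicates out of 100M rows is consistent with a key space of
# roughly 10^13 distinct values:
print(expected_duplicates(100_000_000, 10 ** 13))  # -> 500.0

def unique_keys(n_rows, width=12):
    """One way to guarantee an exact row count: embed a monotonically
    increasing sequence number in each key instead of drawing keys
    independently at random."""
    for i in range(n_rows):
        yield f"key-{i:0{width}d}"
{code}
If exact row counts matter for Pherf's functional tests, a sequence-embedded
generator like the second helper would eliminate duplicates entirely.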
[[email protected]], [~mujtabachohan]
> Duplicate keys generated by performance.py script
> -------------------------------------------------
>
> Key: PHOENIX-2240
> URL: https://issues.apache.org/jira/browse/PHOENIX-2240
> Project: Phoenix
> Issue Type: Bug
> Reporter: Mujtaba Chohan
> Assignee: Mujtaba Chohan
> Priority: Minor
>
> 500 out of 100M rows are duplicates. See details at
> http://search-hadoop.com/m/9UY0h26jwA21rW0i1/v=threaded