Thanks for the feedback! I did think of something along these lines, but I abandoned the idea because, as you said, it looks like it could get pretty inefficient, especially when there are several sets of duplicate keys. Our data could have dozens of duplicate keys, so N could be a potentially large number, and having to do N additional joins for a single join operation might be impractical in our case, not to mention the added complexity of the filtering.
It just seems like there ought to be an easier way?

On Dec 28, 11:58 am, asgallant <[email protected]> wrote:
> This may be horrendously complicated to write or really inefficient in
> execution, but my first thought is to filter dt2 into two or more views,
> such that each view contains only unique keys. Join each view (separately)
> with dt1, producing N joined DataTables, then manually merge the joined
> tables together (may be necessary to remove duplicate rows if you are not
> doing inner joins).

--
You received this message because you are subscribed to the Google Groups "Google Visualization API" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/google-visualization-api?hl=en.
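For anyone following along, the split-join-merge approach asgallant describes can be sketched with plain row arrays standing in for `google.visualization.DataTable`/`DataView` objects. This is just an illustration of the algorithm, not the actual Visualization API calls; the table data and helper names here are made up. Group i of dt2 holds the i-th occurrence of each key, so every group has unique keys; each group is then inner-joined with dt1 and the results are concatenated:

```javascript
// Hypothetical row data: [key, value]. In the real API these would be
// DataTables, and each "group" would be a DataView over dt2.
const dt1 = [['a', 1], ['b', 2], ['c', 3]];
const dt2 = [['a', 10], ['a', 11], ['b', 20]];   // duplicate key 'a'

// Step 1: split dt2 into N groups, each containing only unique keys.
// Group i receives the i-th occurrence of each key.
function splitByOccurrence(rows) {
  const seen = new Map();   // key -> number of occurrences so far
  const groups = [];
  for (const row of rows) {
    const n = seen.get(row[0]) || 0;
    seen.set(row[0], n + 1);
    if (!groups[n]) groups[n] = [];
    groups[n].push(row);
  }
  return groups;
}

// Step 2: inner-join one unique-key group with dt1 on column 0.
function innerJoin(left, right) {
  const index = new Map(right.map(r => [r[0], r]));
  const out = [];
  for (const l of left) {
    const r = index.get(l[0]);
    if (r) out.push([l[0], l[1], r[1]]);
  }
  return out;
}

// Step 3: join each group separately, then merge (concatenate) the
// N joined results. With inner joins no deduplication is needed.
const merged = splitByOccurrence(dt2)
  .flatMap(group => innerJoin(dt1, group));

console.log(merged);
// [['a', 1, 10], ['b', 2, 20], ['a', 1, 11]]
```

Note the cost concern raised above shows up directly here: one join per group, so dozens of duplicate occurrences of a key mean dozens of join passes.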
