Hi, I have a Hadoop application in which each run of the map function can potentially generate a large number of key-value pairs, which causes an out-of-memory error. I am wondering if there is a way to tell Hadoop to write the key-value pairs to disk periodically? thanks, Eric Zhang Vespa content @Yahoo! Work: 408-349-2466
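For context: in Hadoop MapReduce the mapper is not expected to accumulate its output in memory at all. Each pair is handed to the framework as it is produced (via `OutputCollector.collect()` in the old API, `Context.write()` in the new one), and the framework buffers map output in a fixed-size sort buffer (controlled by `io.sort.mb` in that era) that spills to disk automatically when it fills. Below is a minimal, framework-free sketch of that emit-as-you-go pattern; the `emit` callback and the `map` signature here are illustrative stand-ins for Hadoop's collector, not the actual Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

public class StreamingEmitDemo {
    // Hypothetical stand-in for Hadoop's OutputCollector/Context:
    // each pair is handed off immediately instead of being collected
    // into a mapper-local data structure that grows without bound.
    static void map(String line, BiConsumer<String, Integer> emit) {
        for (String token : line.split("\\s+")) {
            // Emit as you go; never accumulate all pairs in a List/Map first.
            emit.accept(token, 1);
        }
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        // In real Hadoop the framework supplies the collector and spills
        // its sort buffer to disk for you; here we just capture the pairs.
        map("a b a", (k, v) -> out.add(k + "=" + v));
        System.out.println(out); // prints [a=1, b=1, a=1]
    }
}
```

If the mapper is written this way, memory use per record stays constant and the periodic write-to-disk the question asks about is handled by the framework's spill mechanism rather than by user code.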
- how to deal with large amount of key value pair outputs in... Eric Zhang
- Re: how to deal with large amount of key value pair o... Toby DiPasquale
- Re: how to deal with large amount of key value pair o... Arun C Murthy
- RE: how to deal with large amount of key value pa... Eric Zhang
- Re: how to deal with large amount of key valu... Owen O'Malley
- RE: how to deal with large amount of key ... Eric Zhang
