Hi,
It would depend mainly on the data volume. Hadoop can be used to refine the
data before inserting it into a traditional architecture (such as a relational database).
If you want to write jobs yourself, several solutions have emerged:
* the plain mapred/mapreduce APIs (the former, org.apache.hadoop.mapred, is
older than the latter, org.apache.hadoop.mapreduce, but both are plain,
low-level Java APIs); a rough sketch of the difference is below
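For what it's worth, here is a minimal sketch of how a mapper looks in each of the two APIs. The class names OldApiMapper and NewApiMapper are just made up for illustration, and both assume the default TextInputFormat input, i.e. (LongWritable, Text) records:

    // OldApiMapper.java -- the older org.apache.hadoop.mapred API
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class OldApiMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
      // The old API hands you an OutputCollector and a Reporter.
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output,
                      Reporter reporter) throws IOException {
        // placeholder logic: emit the whole input line with a count of 1
        output.collect(new Text(value.toString()), new IntWritable(1));
      }
    }

    // NewApiMapper.java -- the newer org.apache.hadoop.mapreduce API
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class NewApiMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      // The new API wraps output, counters and configuration in a single Context.
      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // placeholder logic: emit the whole input line with a count of 1
        context.write(new Text(value.toString()), new IntWritable(1));
      }
    }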
Hi Prashant,
Welcome to the Hadoop community. :)
Hadoop is meant for processing large data volumes. That said, for your
custom requirements you should write your own mapper and reducer containing
your business logic for processing the input data. You can also have a look
at Hive and Pig, which are higher-level tools (a SQL-like query language and
a data-flow scripting language, respectively) that compile down to MapReduce
jobs, so you do not have to write the low-level Java code yourself.
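To make the mapper/reducer part concrete, here is a minimal sketch of a job skeleton using the newer org.apache.hadoop.mapreduce API. It assumes Hadoop 2.x or later (for Job.getInstance), plain-text input, and a word-count-style aggregation; MyJob, MyMapper and MyReducer are only placeholder names marking where your own business logic would go:

    // MyJob.java -- skeleton of a custom MapReduce job (newer API)
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MyJob {

      // Mapper: the per-record part of your business logic goes here.
      public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);          // emit (word, 1) for every token
          }
        }
      }

      // Reducer: the per-key aggregation part of your logic goes here.
      public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));  // emit (word, total count)
        }
      }

      // Driver: wires the pieces together and submits the job to the cluster.
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "my custom job");
        job.setJarByClass(MyJob.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

You would then package this into a jar and launch it with something along the lines of: hadoop jar myjob.jar MyJob /input/path /output/path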