[ https://issues.apache.org/jira/browse/PIG-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005707#comment-13005707 ]

Alan Gates commented on PIG-1891:
---------------------------------

When we redesigned the load and store interfaces in 0.7, we made a design 
decision not to duplicate Hadoop functionality but to be as thin a layer as 
possible.  Of course, where there are things everyone will want to do, it makes 
sense to make those easier and accept a little duplication.  My sense is that 
this is not one of those cases.  But if we see many others voting for this 
feature, I could be convinced that it makes sense.  I will leave this JIRA open 
for now to see how others vote, though I will change the priority to minor.

I will also forward this information to Corrine (who writes our docs).  She may 
want to include it in her section on store functions.


> Enable StoreFunc to make intelligent decision based on job success or failure
> -----------------------------------------------------------------------------
>
>                 Key: PIG-1891
>                 URL: https://issues.apache.org/jira/browse/PIG-1891
>             Project: Pig
>          Issue Type: New Feature
>            Reporter: Alex Rovner
>
> We are in the process of using Pig for various data processing and component 
> integration. Here is where we feel Pig's storage funcs fall short:
> They are not aware of whether the overall job has succeeded. This creates a 
> problem for storage funcs that need to "upload" results into another system: 
> a DB, FTP, another file system, etc.
> I looked at the DBStorage in the piggybank 
> (http://svn.apache.org/viewvc/pig/trunk/contrib/piggybank/java/src/main/java/org/apache/pig/piggybank/storage/DBStorage.java?view=markup)
>  and what I see is essentially a mechanism which, for each task, does the 
> following:
> 1. Creates a RecordWriter (in this case, opens a connection to the DB).
> 2. Opens a transaction.
> 3. Writes records into a batch.
> 4. Executes a commit or rollback depending on whether the task was successful.
> While this approach works great at the task level, it does not work at all at 
> the job level. 
> If certain tasks succeed but the overall job fails, partial records get 
> uploaded into the DB.
> Any ideas on the workaround? 
> Our current workaround is fairly ugly: we created a Java wrapper that 
> launches Pig jobs and then uploads to the DB once the Pig job is successful. 
> While this approach works, it's not really integrated into Pig.
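The four-step, per-task pattern quoted above maps onto plain JDBC roughly as 
follows. This is a minimal sketch, not DBStorage's actual code: the class name 
JdbcTaskWriter, the table name "results", and the two-column schema are all 
hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class JdbcTaskWriter {
        private final Connection conn;
        private final PreparedStatement stmt;

        public JdbcTaskWriter(String jdbcUrl) throws SQLException {
            conn = DriverManager.getConnection(jdbcUrl); // step 1: open connection
            conn.setAutoCommit(false);                   // step 2: open transaction
            stmt = conn.prepareStatement("INSERT INTO results VALUES (?, ?)");
        }

        public void write(String key, String value) throws SQLException {
            stmt.setString(1, key);
            stmt.setString(2, value);
            stmt.addBatch();                             // step 3: batch records
        }

        // step 4: commit if the task finished cleanly, roll back otherwise
        public void close(boolean taskSucceeded) throws SQLException {
            try {
                if (taskSucceeded) {
                    stmt.executeBatch();
                    conn.commit();
                } else {
                    conn.rollback();
                }
            } finally {
                conn.close();
            }
        }
    }

Because each task commits independently, a job in which some tasks succeed and 
others fail leaves partial data behind, which is exactly the problem described.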

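The wrapper workaround can be expressed with the PigRunner API that shipped in 
Pig 0.8. Again, a rough sketch under assumptions: the script name 
etl_script.pig and the uploadToDb() helper are hypothetical placeholders for 
the post-job upload step.

    import org.apache.pig.PigRunner;
    import org.apache.pig.tools.pigstats.PigStats;

    public class PigThenUpload {
        public static void main(String[] args) throws Exception {
            // Run the script with the same arguments the pig command line takes.
            PigStats stats = PigRunner.run(new String[] { "etl_script.pig" }, null);
            if (stats.isSuccessful()) {
                uploadToDb(); // upload only after the whole job has succeeded
            } else {
                System.err.println("Pig job failed: " + stats.getErrorMessage());
            }
        }

        private static void uploadToDb() {
            // hypothetical placeholder: bulk-load the HDFS output into the DB
        }
    }
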
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
