I guess it comes back to the question of "how big a pig script is". If we are only considering 5-line pig scripts, where you load exactly what you need to compute, crunch the numbers and dump them, I agree it does not make much sense.
If one starts thinking about something more ETL-ish (which I understand is not exactly the main purpose of pig), then one could want to use pig to "move" data around or load data from somewhere, do something "heavy" that ETL software just cannot cope with efficiently enough (build indexes, process images, whatever), and store the results somewhere else: a scenario where there can be fields that pig will just forward without touching them.
I admit my background, where we were using the same software for ETL-like stuff and heavy processing (that is, mostly building indexes), may give me a very biased opinion about pig and what it should be. But I would definitely like to use pig for what it is/will be excellent for, as well as for stuff where it will be just ok.
So I still think the extension point is worth having. Half my brain is already thinking about ways of cheating and using Alan's fields list to pass other stuff around...
Another concrete example, then, and I'll stop bothering you all :) In our tools, we use some field metadata to denote that a field's content is a primary key to a record. When we copy these field values somewhere else, we automatically tag them as a foreign key (instead of primary). When we dump the data to disk (to a final-user CDROM image in most cases), the fact that the column refers to a table also present on the disk can be automagically stored, as it is a feature of our final format: without the application developer re-specifying the relations, the "UDF store equivalent" is clever enough to store the information.
The script written by the application developer who prepares a CDROM can be several screens long, with bits spread across separate files. The data model can be quite complex too. In this context, it is important that things like "this field acts as a record key" are said once.
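A tiny sketch of that idea, purely hypothetical: the FieldMeta class and the "key.role" key below are invented for illustration, not existing Pig (or in-house) API. The point is just that a field carries an open metadata bag, and a copy operation can rewrite a primary key tag into a foreign key tag automatically.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-field metadata bag. Nothing here is existing Pig API.
public class FieldMeta {
    public static final String KEY_ROLE = "key.role"; // "primary" or "foreign"

    private final Map<String, Serializable> meta = new HashMap<String, Serializable>();

    public void put(String key, Serializable value) { meta.put(key, value); }
    public Serializable get(String key) { return meta.get(key); }

    // When a field's values are copied somewhere else, a primary key
    // in the source automatically becomes a foreign key in the copy.
    public FieldMeta copyForDerivedField() {
        FieldMeta copy = new FieldMeta();
        copy.meta.putAll(this.meta);
        if ("primary".equals(meta.get(KEY_ROLE))) {
            copy.meta.put(KEY_ROLE, "foreign");
        }
        return copy;
    }
}
```

With something like this, the "say it once" property holds: the developer tags the key role at load time and every downstream copy keeps the relation information without restating it.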
On 30 May 08, at 16:13, pi song wrote:
Moreover, adding metadata is conceptually adding another way to parameterize load/store functions. Making UDFs parameterized by other UDFs is therefore also possible functionally, but I just couldn't think of any good use cases.

On Sat, May 31, 2008 at 12:09 AM, pi song <[EMAIL PROTECTED]> wrote:

Just out of curiosity: you say the UDF store in your example can somehow "learn" from the UDF load. That information still might not be useful, because between "load" and "store" you've got processing logic which might or might not alter the validity of information transferred directly from "load" to "store". An example would be: I load a list of numbers and then convert them to strings. The information on the UDF store side is then not applicable. Don't you think the cases where this concept can be useful are very rare?

Pi

On Fri, May 30, 2008 at 11:44 PM, Mathieu Poumeyrol <[EMAIL PROTECTED]> wrote:

Pi,

Well... I was thinking... the three of them, actually. Alan's list is quite comprehensive, so it is not that easy to find a convincing example, but I'm sure UDF developers may need some additional information to communicate metadata from one UDF to another. It does not make sense if you think "one UDF function", but it is a way to have two coordinated UDFs communicating.

For instance, the developer of a JDBC pig "connector" will typically write a UDF load and a UDF store. What if he wants the loader to discover the field collection (case 3, "Self describing data" on Alan's page) from JDBC and propagate the exact column type of a given field (as in "VARCHAR(42)"), to create it the right way in the UDF store? Or the table name? Or the fact that a column is indexed, a primary key, a foreign key constraint, some encoding info... He may also want to develop a UDF pipeline function that would perform some foreign key validation against the database at some point in his script.
Having the information in the metadata may be useful. Some other fields of application we cannot think of today may need some completely different metadata. My whole point is: Pig should provide some metadata extension point.

On 30 May 08, at 13:54, pi song wrote:

I don't get it, Mathieu. UDF is a very broad term. It could be a UDF load, a UDF store, or a UDF used as a function in the pipeline. Can you explain a bit more?

On Fri, May 30, 2008 at 9:14 PM, Mathieu Poumeyrol <[EMAIL PROTECTED]> wrote:

All,

Looking at the very extensive list of types of file specific metadata, I think (from experience) that a UDF function may need to attach some information (any information, actually) to a given field (or file), to be retrieved by another UDF downstream. What about adding a Map<String, Serializable> to each file and each field?

-- Mathieu

On 30 May 08, at 01:24, pi song wrote:

Alan,

I will start thinking about this as well. When do you want to start the implementation?

Pi

On 5/29/08, Apache Wiki <[EMAIL PROTECTED]> wrote:

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Pig Wiki" for change notification. The following page has been changed by AlanGates:
http://wiki.apache.org/pig/PigMetaData

------------------------------------------------------------------------------

  information, histograms, etc.

  == Pig Interface to File Specific Metadata ==
- Pig should support four options with regard to file specific metadata:
+ Pig should support four options with regard to reading file specific metadata:
  1. No file specific metadata available. Pig uses the file as input with no knowledge of its content. All data is assumed to be !ByteArrays.
  2. User provides schema in the script. For example, `A = load 'myfile' as (a: chararray, b: int);`.
  3. Self describing data. Data may be in a format that describes the schema, such as JSON. Users may also have other proprietary ways to store information about the data in a file, either in the file itself or in an associated file.
  Changes to the !LoadFunc interface made as part of the pipeline rework support this for data type and column layout only. It will need to be expanded to support other types of information about the file.
  4. Input from a data catalog. Pig needs to be able to query an external data catalog to acquire information about a file. All the same information available in option 3 should be available via this interface. This interface does not yet exist and needs to be designed.
+ It should support options 3 and 4 for writing file specific metadata as well.

  == Pig Interface to Global Metadata ==
- An interface will need to be designed for pig to interface to an external data catalog.
+ An interface will need to be designed for pig to read from and write to an external data catalog.

  == Architecture of Pig Interface to External Data Catalog ==
  Pig needs to be able to connect to various types of external data catalogs (databases, catalogs stored in flat files, web services, etc.). To facilitate this
- pig will develop a generic interface that allows it to make specific types of queries to a data catalog. Drivers will then need to be written to implement
+ pig will develop a generic interface that allows it to query and update a data catalog. Drivers will then need to be written to implement
  that interface and connect to a specific type of data catalog.

  == Types of File Specific Metadata Pig Will Use ==
- Pig should be able to acquire the following types of information about a file via either self description or an external data catalog. This is not to say
+ Pig should be able to acquire and record the following types of information about a file via either self description or an external data catalog. This is not to say
  that every self describing file or external data catalog must support every one of these items. This is a list of items pig may find useful and should be
- able to query for. If the metadata source cannot provide the information, pig will simply not make use of it.
+ able to query for and create. If the metadata source cannot provide or store the information, pig will simply not make use of it or record it.
  * Field layout (already supported)
  * Field types (already supported)
  * Sortedness of the data, both key and direction (ascending/descending)

@@ -52, +54 @@

  == Priorities ==
  Given that the usage for global metadata is unclear, the priority will be placed on supporting file specific metadata. The first step should be to define the
- interface changes in !LoadFunc and the interface to external data catalogs.
+ interface changes in !LoadFunc, !StoreFunc and the interface to external data catalogs.
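To make the extension point discussed in this thread concrete, here is a minimal sketch of a Map<String, Serializable> bag attached to each field, and of how a coordinated JDBC loader/storer pair might use it. All class names and metadata keys (FieldWithMeta, "jdbc.sqlType", ...) are invented for illustration; nothing here is existing Pig API.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical field descriptor carrying the proposed open-ended
// metadata map. Not existing Pig API.
public class FieldWithMeta {
    private final String name;
    private final Map<String, Serializable> metadata = new HashMap<String, Serializable>();

    public FieldWithMeta(String name) { this.name = name; }
    public String getName() { return name; }
    public Map<String, Serializable> getMetadata() { return metadata; }
}

// A JDBC-style loader could record catalog facts it discovered...
class JdbcLoaderSketch {
    static FieldWithMeta describeColumn() {
        FieldWithMeta f = new FieldWithMeta("title");
        f.getMetadata().put("jdbc.sqlType", "VARCHAR(42)"); // exact column type
        f.getMetadata().put("jdbc.indexed", Boolean.TRUE);
        return f;
    }
}

// ...and a coordinated storer could read them back to recreate the
// column "the right way", falling back to a default when absent.
class JdbcStorerSketch {
    static String columnDdl(FieldWithMeta f) {
        Serializable sqlType = f.getMetadata().get("jdbc.sqlType");
        return f.getName() + " " + (sqlType != null ? sqlType : "VARCHAR(255)");
    }
}
```

The pipeline between load and store never has to interpret the bag; it just forwards it, which matches the "pig will just forward, without playing with" fields described earlier in the thread.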
