westonpace commented on a change in pull request #12279:
URL: https://github.com/apache/arrow/pull/12279#discussion_r797964840



##########
File path: format/substrait/extension_types.yaml
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# substrait::{ExtensionTypeVariation, ExtensionType}s
+# for wrapping types which appear in the arrow type system but
+# are not first-class in substrait. These include:
+# - null
+# - unsigned integers
+# - half-precision floating point numbers
+# - 32 bit times and dates
+# - timestamps with units other than microseconds
+# - timestamps with timezones other than UTC
+# - 256 bit decimals
+# - sparse and dense unions
+# - dictionary encoded types
+# - durations
+# - string and binary with 64 bit offsets
+# - list with 64 bit offsets
+# - interval<months: i32>
+# - interval<days: i32, millis: i32>
+# - interval<months: i32, days: i32, nanos: i64>
+# - arrow::ExtensionTypes
+
+# FIXME these extension types are not parameterizable, which means among
+# other things that we can't declare a dictionary type here at all, since
+# we'd have to declare a different dictionary type for each encoded type
+# (but that is an infinite space). Similarly, do we need to declare a
+# timestamp variation for all possible timezone strings?
+#
+# Ultimately these declarations are a promise which needs to be backed by
+# equivalent serde in c++. For example, consider u8: when serializing to
+# substrait, we need to wrap instances of arrow::uint8 into the type
+# variation listed below. It would be ideal if we could SinglePointOfTruth
+# this correspondence; either generating c++ from the YAML or YAML from the
+# c++.
+#
+# At present (AFAICT) it's not valid to make this user extensible because
+# even if a user adds their custom scalar function to the registry *and*
+# defines the mapping from that scalar function to a substrait::ExtensionFunction,
+# the corresponding YAML doesn't exist at any URI and so it can't be used in
+# substrait. Perhaps we could still help that case by providing a tool to
+# generate YAML from functions; that'd simplify the lives of people trying to
+# write arrow::compute::Functions to "define the function and, if you want to
+# reference it from substrait, generate this YAML and put it at some URI".
+#
+# In any case, for the foreseeable future, generation would be far too brittle;
+# URIs will not be accessed by anything but humans and the YAML is effectively
+# structured documentation. Thus extenders should pass the URI in the same way
+# they pass a description string; it's opaque to anything in arrow.

Review comment:
       > I think it is necessary that these URIs are resolvable to a valid YAML 
file. For example, the YAML file can define function overloads for a particular 
function name, with possibly different implementations, and even different 
return types by using type expressions
   
   To some degree I think this is valid.  I don't think it would be valid if 
the YAML defined `ADD(int32, int32) -> bool` because that would change the 
semantics of the operation.  Either way, this is still perfectly fine and 
doesn't require programmatic consumption of the YAML.
   
   For example, imagine a particular consumer never wanted to handle overflow 
and so they defined `ADD(int32, int32) -> int64`.  A producer then calls 
`ADD(int32, int32) -> int32`.  The consumer, at that point, would reject the 
plan.  Neither the producer nor the consumer needs to consume the YAML to do
this.
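   
   To make that concrete, a consumer's extension YAML might look something
like the sketch below.  This is hypothetical (I'm paraphrasing the substrait
extension schema from memory and the names are illustrative), but it shows
that the declaration is human-readable documentation of the contract, not
something either side has to parse:
   
   ```yaml
   # Hypothetical consumer-side extension YAML: the consumer declares only
   # a widening ADD so it never has to handle overflow.
   scalar_functions:
     - name: "add"
       impls:
         - args:
             - value: i32
             - value: i32
           # A producer calling ADD(int32, int32) -> int32 doesn't match
           # this declaration, so the consumer rejects the plan.
           return: i64
   ```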
   
   Or, for your example, the producer could reject the plan because it doesn't
recognize `compute_answer_to_life_the_universe_and_everything`.  Or the
producer could simply emit a warning, or it could blindly accept it (it
would need the user to specify the return type when creating the plan).
   
   The producer could blindly send it to the consumer, or it could try to
validate that the consumer can handle it.
   
   The consumer would reject the plan if it contained this function because it 
does not recognize it.
   
   > it's a producer's responsibility to only emit functions/types that a 
producer can handle
   
   I'm assuming you meant "emit functions/types that a consumer can handle"
(or it's possible I don't understand you).  Assuming that reading, I don't
agree.  A wise producer may be able to negotiate and polyfill (see later),
but that is by no means a requirement.  If the producer sends something the
consumer can't consume, the consumer just rejects it.
   
   > it's the responsibility of a consumer to make sense of whatever the 
producer throws at it
   
   I'm not sure what you mean here exactly.  Basically, a consumer's 
responsibility is to reject the plan if it can't satisfy exactly what the 
producer is asking for.  A consumer might need to do some inefficient 
workarounds to satisfy the plan and that's ok.  Or it could reject the plan.  
Or it could have a flag `allow_inefficient_workarounds`.
   
   The only thing that is really concrete is that the consumer either does 
exactly what the producer is asking for (semantically speaking) or it rejects 
the plan.  The only true "failure condition" is where we accept the plan but 
don't fulfill the producer's request (e.g. we return 0 or null for the 
`compute_answer_to_life_the_universe_and_everything` function).
   
   > whether we're to expect there to be some external tool that converts from 
one function/type set to another (so, for our purposes, this would be the same 
as "producer's responsibility," but I don't know if this is a reasonable 
expectation).
   
   Jacques has mentioned this idea a few times, specifically using the term
"polyfill" (see
https://github.com/substrait-io/substrait/discussions/31#discussioncomment-1354979).
It's a nice-to-have, but I don't think we really care whether it exists at
the moment.
   
   > but it's unclear to me where in the pipeline this is supposed to fail (or 
equivalently, which parts of the pipeline I'd have to modify to make it work)
   
   In discussions with Jacques there have been occasional mentions of an 
optional handshake of sorts between producers and consumers.  That's not really 
fleshed out as far as I know.
   
   ---
   
   Summarizing all that:
   
   * I think it will always be a valid path that the YAML either does not exist 
or is not publicly hosted.  This will interfere with some optional steps (e.g. 
"a middleman validator couldn't validate the plan because it used a function we 
don't know about") but it should not interfere with the ability to run the plan.
   * I do believe that automatic YAML generation is our best long-term
solution.
   * I don't think there is much benefit in doing automatic YAML generation
right now.
   * These comments do not belong in the YAML doc.  We can take any of these 
notes we feel are important and add them to ARROW-15535 (which I've just 
created).



