jvanstraten commented on a change in pull request #12279:
URL: https://github.com/apache/arrow/pull/12279#discussion_r796665448



##########
File path: format/substrait/extension_types.yaml
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# substrait::{ExtensionTypeVariation, ExtensionType}s
+# for wrapping types which appear in the arrow type system but
+# are not first-class in substrait. These include:
+# - null
+# - unsigned integers
+# - half-precision floating point numbers
+# - 32 bit times and dates
+# - timestamps with units other than microseconds
+# - timestamps with timezones other than UTC
+# - 256 bit decimals
+# - sparse and dense unions
+# - dictionary encoded types
+# - durations
+# - string and binary with 64 bit offsets
+# - list with 64 bit offsets
+# - interval<months: i32>
+# - interval<days: i32, millis: i32>
+# - interval<months: i32, days: i32, nanos: i64>
+# - arrow::ExtensionTypes
+
+# FIXME these extension types are not parameterizable, which means among
+# other things that we can't declare dictionary type here at all since
+# we'd have to declare a different dictionary type for all encoded types
+# (but that is an infinite space). Similarly, do we need to declare a
+# timestamp variation for all possible timezone strings?
+#
+# Ultimately these declarations are a promise which needs to be backed by
+# equivalent serde in c++. For example, consider u8: when serializing to
+# substrait, we need to wrap instances of arrow::uint8 into the type
+# variation listed below. It would be ideal if we could SinglePointOfTruth
+# this correspondence; either generating c++ from the YAML or YAML from the
+# c++.
+#
+# At present (AFAICT) it's not valid to make this user extensible because
+# even if a user adds their custom scalar function to the registry *and*
+# defines the mapping from that scalar function to a substrait::ExtensionFunction,
+# the corresponding YAML doesn't exist at any URI and so it can't be used in
+# substrait. Perhaps we could still help that case by providing a tool to
+# generate YAML from functions; that'd simplify the lives of people trying to
+# write arrow::compute::Functions to "define the function and if you want to
+# reference it from substrait generate this YAML and put it at some URI".
+#
+# In any case for the foreseeable future generation would be far too brittle;
+# URIs will not be accessed by anything but humans and the YAML is effectively
+# structured documentation. Thus extenders should pass the URI in the same way
+# they pass a description string; it's opaque to anything in arrow.
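
(Not part of the patch: a minimal sketch of the kind of declaration the FIXME above is talking about for u8, assuming Substrait's simple-extension layout with `type_variations`, `parent`, `name` and `functions` fields; the exact schema, and whether `functions: SEPARATE` is the right marker for "u8 does not inherit i8's functions", are assumptions here. Serializing to Substrait would then mean tagging every arrow::uint8 field with this variation.)

```yaml
# hypothetical sketch, not the file's actual contents
type_variations:
  - parent: i8
    name: u8
    description: "an unsigned 8 bit integer, wrapping arrow::uint8"
    functions: SEPARATE  # assumed marker meaning u8 does not share i8's function overloads
```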

Review comment:
       I think what Ben is trying to say here is that *currently* nothing reads these YAMLs, so people are just blindly writing the files by hand, and thus it makes no sense to try to generate them correctly yet. This kind of ties into https://github.com/substrait-io/substrait/issues/131, which I opened a few days ago.
   
   However, as things stand, I think it *is* necessary that these URIs resolve to a valid YAML file. For example, the YAML file can define multiple overloads for a given function name, with possibly different implementations and even different return types (via type expressions); I sketch what I mean right after the list below. The semantics of a plan thus depend on Substrait's type system, which in turn depends on which prototypes are defined in the YAMLs. AFAIK though, the spec doesn't define whether:
   
    - it's a producer's responsibility to emit only functions/types that the consumer can handle (this seems to be what we're assuming right now, i.e. the producer has to conform to whatever YAML we generate, but how could it do that? It would mean that a Substrait representation of a query is not agnostic of the consumer);
    - it's the consumer's responsibility to make sense of whatever the producer throws at it (which would have been my assumption personally, in which case I would strongly argue that we need a dynamic registry for these things rather than relying on C++ metaprogramming -- after all, someone may want to register a mapping for their particular use case from Python -- and it would also mean that we may have to read the YAML rather than generate it, though if we're lax about type-checking the incoming plan we may be able to avoid reading it); or
    - we're to expect some external tool to convert from one function/type set to another (for our purposes this would be the same as "producer's responsibility", but I don't know whether that is a reasonable expectation).
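
    To make that concrete, here is a rough sketch of the kind of overload declaration I mean, assuming Substrait's simple-extension YAML layout (the `scalar_functions`/`impls`/`args` field names are my reading of that layout, and the function itself is made up):

    ```yaml
    # hypothetical sketch of an extension YAML
    scalar_functions:
      - name: "add"
        description: "made-up example: two overloads with different return types"
        impls:
          - args:
              - name: x
                value: i8
              - name: y
                value: i8
            return: i8
          - args:
              - name: x
                value: fp64
              - name: y
                value: fp64
            return: fp64
    ```

    A consumer that cannot resolve the URI to this file has no way of knowing which of these overloads -- and hence which return type -- a function reference in the plan is pointing at.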
   
   Similarly, AFAIK the spec also doesn't specify who's supposed to define 
these YAML files. Consumers, for whatever they support? Producers, for whatever 
they want? Some third party somehow, for whatever function they want to exist? 
Obviously I can't just define a function named `i32 
compute_answer_to_life_the_universe_and_everything()` in some YAML file stored 
on my local PC and expect Arrow's compute engine to yield 42, but it's unclear 
to me *where* in the pipeline this is supposed to fail (or equivalently, which 
parts of the pipeline I'd have to modify to make it work). Until these sorts of 
questions are unambiguously answered by the Substrait spec (and maybe they 
already are and I just haven't seen that bit of the spec yet) I don't think it 
makes sense to discuss details of how Arrow in particular is to approach this 
problem.
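
    For what it's worth, assuming the same simple-extension layout as in the sketch above, that hypothetical declaration would be nothing more than:

    ```yaml
    # hypothetical sketch
    scalar_functions:
      - name: "compute_answer_to_life_the_universe_and_everything"
        description: "made up; nothing in Arrow maps this name to a compute kernel"
        impls:
          - args: []
            return: i32
    ```

    so the interesting part is not writing the YAML, it's which stage of the pipeline is supposed to reject (or honor) it.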



