[ 
https://issues.apache.org/jira/browse/DRILL-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17524062#comment-17524062
 ] 

ASF GitHub Bot commented on DRILL-8188:
---------------------------------------

paul-rogers commented on PR #2515:
URL: https://github.com/apache/drill/pull/2515#issuecomment-1102092604

   This PR is getting a bit complex with the bug or two that it uncovered. 
Let me explain a bit about how EVF2 works. There are two cases: wildcard 
projection (SELECT *) and explicit projection (SELECT a, b, c). EVF2 works 
differently in these two cases.
   
   Then, for each reader, there are three other cases. The reader might know 
all its columns before the file is even opened. The PCAP reader is an example: 
all PCAP files have the same schema, so we don't need to look at the file to 
know the schema. The second case is files where we can learn the schema when 
opening the file. Parquet and CSV are examples: we can learn the Parquet schema 
from the file metadata, and the CSV schema from the headers. The last case is 
where we don't know the schema until we read each row. JSON is the best example.
   
   So, now we have six cases to consider. This is why EVF2 is so complex!
   
   For the wildcard, EVF2 "discovers" columns as the reader creates them: 
either via the up-front schema, or as the reader reads data. In JSON, for 
example, we can discover a new column at any time. Once a column is added, EVF2 
will automatically fill in null values if values are missing. In the extreme 
case, it can fill in nulls for an entire batch. Because of the wildcard, all 
discovered columns are materialized and added to the result set. When reading 
JSON, if a column does not appear until the third batch, the first two batches 
won't contain that column; the third batch will include the column and will 
report a schema change. This can cause a problem for operators such as join, 
sort, or aggregation that have to buffer a collection of rows: not all of them 
can handle a schema change.
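
   To make the wildcard flow concrete, here is a minimal sketch of how a 
JSON-like reader might discover a column mid-batch through EVF's row-set 
loader. The scan setup is elided and the column names are made up for 
illustration; only the discovery call pattern matters here.

```java
// Sketch only: assumes a RowSetLoader obtained from the scan's ResultSetLoader.
RowSetLoader rowWriter = loader.writer();

// First record: only column "a" has been seen so far.
rowWriter.start();
rowWriter.scalar("a").setInt(10);
rowWriter.save();

// A later record reveals a new column "b". Adding it materializes a vector;
// rows already written in this batch are back-filled with nulls because the
// column is declared OPTIONAL (nullable).
int bIndex = rowWriter.addColumn(
    MetadataUtils.newScalar("b", MinorType.VARCHAR, DataMode.OPTIONAL));

rowWriter.start();
rowWriter.scalar("a").setInt(20);
rowWriter.scalar(bIndex).setString("hello");
rowWriter.save();
```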
   
   Now, for the explicit projection case, EVF2 knows which columns the user 
wants: those in the list. EVF2 waits as long as it can, hoping the reader will 
provide the columns. Again, the reader can provide them up front, before the 
first record, or as the read proceeds (as in JSON). As the reader provides each 
column, EVF2 has to decide: do we need that column? If so, we create a vector 
and a column writer: we materialize the column. If the column is not needed, 
EVF2 creates a dummy column writer.
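
   From the reader's point of view this is invisible: the reader writes every 
column it reads, and writes that land on a dummy writer are simply discarded. 
A minimal sketch, with illustrative column names:

```java
// Sketch only: the reader declared columns a, b, c, but the query is
// SELECT a, c. EVF2 backs "a" and "c" with real vectors and hands "b" a
// dummy writer, so this write loop needs no projection checks of its own.
rowWriter.start();
rowWriter.scalar("a").setInt(10);           // projected: real vector
rowWriter.scalar("b").setString("ignored"); // not projected: dummy writer, no-op
rowWriter.scalar("c").setDouble(1.5);       // projected: real vector
rowWriter.save();
```
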
   Now the interesting part. Suppose we get to the end of the first batch, the 
query wants column c, and the reader never defined column c. What do we do? 
In this case, we have to make something up. Historically, Drill would make up a 
Nullable Int, with all-null values. EVF added the ability to specify the type 
for such columns, and we use that. If a provided schema is available, then the 
user tells us the type.
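
   A minimal sketch of that last point (how the provided schema gets attached 
to the scan is elided; the column name is illustrative):

```java
// Sketch only: the query projects "c" but the reader never delivers it.
// With no other information, EVF2 invents a nullable INT full of nulls.
// A provided schema lets the user dictate the type instead:
TupleMetadata provided = new SchemaBuilder()
    .addNullable("c", MinorType.VARCHAR)
    .buildSchema();
// With this schema in effect, the made-up "c" column comes out as a
// nullable VARCHAR of nulls rather than the nullable-INT guess.
```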
   
   Now we get to another interesting part. What if we guessed, say, Varchar, 
but the column later shows up as a JSON array? We're stuck: we can't go back 
and redo the old batches. We end up with a "hard" schema change. Bad things 
happen unless the query is really simple. This is the fun of Drill's schemaless 
system.
   
   With that background, we can try to answer your question. The answer is: it 
depends. If the reader says, "hey Mr. EVF2, here is the full schema I will 
read, I promise not to discover more columns", then EVF2 will throw an 
exception if later you say, "ha! just kidding. Actually, I discovered another 
one." I wonder if that's what is happening here.
   
   If, however, the reader left the schema open, and said, "here are the 
columns I know about now, but I might find more later", then EVF2 will expect 
more columns, and will handle them as above: materialize them if they are 
projected or if we have a wildcard, provide a dummy writer if we have explicit 
projection and the column is not projected.
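
   In code, the two contracts look roughly like this (a sketch, assuming the 
usual SchemaNegotiator calls; a real reader makes one of the two 
`tableSchema()` calls, not both):

```java
TupleMetadata schema = new SchemaBuilder()
    .add("a", MinorType.INT)
    .addNullable("b", MinorType.VARCHAR)
    .buildSchema();

// Closed schema: "this is everything I will ever read." A later
// rowWriter.addColumn(...) breaks the promise and EVF2 raises an error.
negotiator.tableSchema(schema, true);

// Open schema: "here is what I know now; more may show up." Columns
// discovered later are materialized if projected (or under a wildcard),
// or routed to a dummy writer otherwise.
negotiator.tableSchema(schema, false);
```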
   
   In this PR, we have two separate cases in the reader constructor.
   
   * In the `if` path, we define a "reader schema" and reserve the right to 
add more columns later. That's what the `false` argument to `tableSchema()` 
means.
   * In the `else` path, we define no schema at all: we don't call 
`tableSchema()`.
   
   This means the reader is doing two entirely different things. In the `if` 
case, we define the schema and we just ask for column writers by name. In the 
`else` case, we don't define a schema, and we have to define the column when we 
ask for the column writers.
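
   Schematically, and not the actual HDF5 reader code, the two styles look 
like this (names are illustrative; only one branch would run in a given 
constructor call):

```java
// Style 1 (the `if` path): declare an open reader schema up front, then
// just look up writers by name.
negotiator.tableSchema(knownSchema, false);  // false = more columns may come
ResultSetLoader loader = negotiator.build();
RowSetLoader rowWriter = loader.writer();
ScalarWriter aWriter = rowWriter.scalar("a");

// Style 2 (the `else` path): declare no schema; define each column at the
// moment its writer is first needed.
ResultSetLoader loader2 = negotiator.build();
RowSetLoader rowWriter2 = loader2.writer();
int bIndex = rowWriter2.addColumn(
    MetadataUtils.newScalar("b", MinorType.VARCHAR, DataMode.OPTIONAL));
ScalarWriter bWriter = rowWriter2.scalar(bIndex);
```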
   
   This seems horribly complicated! I wonder: are we missing logic in the 
`if` case? Or should there be two distinct readers, each of which implements 
one of the above cases?




> Convert HDF5 format to EVF2
> ---------------------------
>
>                 Key: DRILL-8188
>                 URL: https://issues.apache.org/jira/browse/DRILL-8188
>             Project: Apache Drill
>          Issue Type: Improvement
>    Affects Versions: 1.20.0
>            Reporter: Cong Luo
>            Assignee: Cong Luo
>            Priority: Major
>
> Use EVF V2 instead of old V1.
> Also, fixed a few bugs in V2 framework.


