Github user jose-torres commented on the issue:

    https://github.com/apache/spark/pull/20726
  
    I don't think either failure reflects a deeper problem:
    
    * The first was because columnar batch scans silently short-circuited around the readerFactories lazy val, which would error out if it were ever called. I fixed this, but the resulting code is kinda messy, so let me know if you see a better way to do it. (There's a rough sketch of the pattern after this list.)
    * The second was a bit of fragile test logic that measures which streaming metric interval each data source method gets called in (see the second sketch below). The change does indeed move reader factory creation from addBatch to queryPlanning; this is expected and not a problem.
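
    For anyone skimming, here is a minimal sketch of the kind of shortcut the first bullet describes. The class and member names are made up for illustration and are not the actual Spark DataSourceV2 interfaces; only readerFactories is taken from the comment above:

```scala
// Hypothetical sketch, not the real Spark API: a reader whose columnar path
// bypasses the row-based readerFactories lazy val, so that val must never be
// forced when the scan is columnar.
class SketchReader(columnar: Boolean) {

  // Row-based factories; errors out if the scan is actually columnar.
  lazy val readerFactories: Seq[String] = {
    if (columnar) {
      throw new IllegalStateException("row factories requested for a columnar scan")
    }
    Seq("rowFactory-0", "rowFactory-1")
  }

  // Columnar factories, only valid when the scan is columnar.
  lazy val batchReaderFactories: Seq[String] = {
    require(columnar, "columnar factories requested for a row-based scan")
    Seq("batchFactory-0", "batchFactory-1")
  }

  // The fix amounts to branching once up front instead of letting callers
  // touch readerFactories unconditionally.
  def factories: Seq[String] =
    if (columnar) batchReaderFactories else readerFactories
}
```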
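
    And a rough illustration of why the phase-measuring logic in the second bullet is fragile. None of these names come from Spark; they just show the shape of an assertion tied to a specific phase:

```scala
// Hypothetical illustration of the fragile test pattern, not Spark's actual
// streaming metrics code: record which phase each data source call lands in,
// then assert on the phase name.
object PhaseRecorder {
  @volatile private var currentPhase: String = "idle"
  private val calls = scala.collection.concurrent.TrieMap.empty[String, String]

  // Run `body` with the given phase name active (e.g. "queryPlanning").
  def inPhase[T](phase: String)(body: => T): T = {
    currentPhase = phase
    try body finally currentPhase = "idle"
  }

  // Called from the data source method under test.
  def record(method: String): Unit = calls.put(method, currentPhase)

  def phaseOf(method: String): Option[String] = calls.get(method)
}

// An assertion like this breaks as soon as factory creation moves from
// "addBatch" to "queryPlanning", even though nothing is functionally wrong:
//   assert(PhaseRecorder.phaseOf("createReaderFactories").contains("addBatch"))
```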

