asautins commented on PR #52576:
URL: https://github.com/apache/spark/pull/52576#issuecomment-3576822893

   > What's the purpose of attempting to avoid this implicit conversion? Is it 
perf? If so lets see some perf numbers. I'm a little cautious about the 
repeated calling to inputMap keyArray and valueArray in the maptype case, my 
gut says this might actually be worse.
   
   I'm not sure I follow what implicit conversion is being avoided.
   
   The purpose of the change is to avoid creating and populating an array when returning an iterator would suffice.  This changes the eval method from O(n) to O(1) and removes the array allocation.
   
   Regarding the implicit conversion: are you referring to how the array is currently populated with the foreach method for both the ArrayData and MapData types?  Those foreach methods are members of ArrayData and MapData, respectively.  Accessing the underlying keyArray and valueArray uses the same get method that the foreach implementation uses, and the same is true for accessing the ArrayData.
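   On the concern about repeated keyArray/valueArray calls in the MapType case, one way to avoid that is to capture both once before building the iterator. This is a hedged sketch with a hypothetical `SimpleMapData` stand-in, not Spark's `MapData` or the exact code in this PR:

```scala
// Hypothetical stand-in for Spark's MapData, just for illustration.
final class SimpleMapData(keys: Array[String], vals: Array[Int]) {
  def numElements(): Int = keys.length
  def keyArray(): Array[String] = keys   // in Spark these return ArrayData
  def valueArray(): Array[Int] = vals
}

// Fetch keyArray/valueArray once, then read per element with the same
// positional get access, so next() never re-derives the backing arrays.
def mapIterator(input: SimpleMapData): Iterator[(String, Int)] = {
  val keys = input.keyArray()
  val vals = input.valueArray()
  Iterator.tabulate(input.numElements())(i => (keys(i), vals(i)))
}
```

   With the arrays bound once outside the iterator, the per-element cost is the same get-based access the foreach path already performs.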
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

