Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/19270
  
    The main change I'm talking about is
https://issues.apache.org/jira/browse/SPARK-20657 (code at
https://github.com/vanzin/spark/pull/41). I did not change the format of the
tables, but how data is loaded into them. The new backing store sorts much
faster than the current code, and it tries to load only the data that will
actually be shown, so it's also light on memory.
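
    To make the "load only the data that will be shown" part concrete, here is
a minimal sketch of the idea; the `TaskStore` and `TaskRow` names below are
made up for illustration, not the API in the PR linked above.

```scala
// Illustrative only: the store keeps rows ordered by an index and hands back
// one slice at a time, so rendering a page never materializes the whole task
// list on the history server's heap.
final case class TaskRow(taskId: Long, duration: Long, status: String)

trait TaskStore {
  // Rows come back already sorted by `sortColumn`; the store skips `offset`
  // rows and yields at most `pageSize` of them.
  def page(sortColumn: String, offset: Int, pageSize: Int): Iterator[TaskRow]
}

def renderTaskPage(store: TaskStore, pageNum: Int, pageSize: Int): Seq[TaskRow] =
  store.page("duration", (pageNum - 1) * pageSize, pageSize).toSeq
```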
    
    The only thing it doesn't support, as I mentioned, is searching, but that
could be added at a cost. (And it currently doesn't cache metrics, so each page
load scans all metrics, which is a little slow, but still an order of magnitude
faster than the numbers shown above.)
    
    I'm not a fan of the current tables, nor am I saying they should stay as
is. But my main concern here is really SHS memory usage. Your point about
hitting the REST API is valid, and I think it should be considered a bug that
no default limit is imposed on large lists.
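
    As a sketch of the kind of default limit I mean (the names and values are
assumptions, not what the API does today):

```scala
// Illustrative only: clamp whatever the caller asked for to a sane range, and
// fall back to a default when no size was requested at all.
object ListLimits {
  val DefaultLimit = 100   // used when the request doesn't specify a size
  val MaxLimit = 10000     // upper bound even when the caller asks for more

  def effectiveLimit(requested: Option[Int]): Int =
    requested.map(r => r.max(1).min(MaxLimit)).getOrElse(DefaultLimit)
}

// ListLimits.effectiveLimit(None)         == 100
// ListLimits.effectiveLimit(Some(500000)) == 10000
```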
    
    The way we've generally solved the "arbitrarily large list" problem in
other apps here is infinite scrolling + server-side search. I'm not really a
front-end dev, so I don't know what the current opinion on that approach is,
but it's definitely lighter in terms of memory usage on the server side, and
load times on the client side are pretty fast too.
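
    Roughly, the server side of that could look like the sketch below (the
`AppStore` trait and `AppSummary` fields are hypothetical); each scroll request
fetches only the next slice, with the search term applied before slicing:

```scala
// Illustrative only: an infinite-scroll client keeps calling this with a
// growing offset; the server filters lazily and materializes just one slice.
final case class AppSummary(id: String, name: String)

trait AppStore {
  def apps: Iterator[AppSummary]   // assumed already ordered, e.g. by end time
}

def listApps(store: AppStore, offset: Int, limit: Int,
    search: Option[String]): Seq[AppSummary] = {
  val filtered = search match {
    case Some(term) => store.apps.filter(_.name.toLowerCase.contains(term.toLowerCase))
    case None       => store.apps
  }
  filtered.slice(offset, offset + limit).toSeq
}
```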

