Hi Su,
as per my understanding, you can change the 1000-record limit from the
Interpreter section by setting a value for the variable
"zeppelin.spark.maxResult". Moon, could you please confirm my understanding?
Regards,
Nihal
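
P.S. As a rough sketch (from memory, so please double-check the exact property
and variable names for your Zeppelin version), it can be set either from the
Interpreter menu or in conf/zeppelin-env.sh:

    # In the Interpreter menu, under the spark interpreter properties:
    #   zeppelin.spark.maxResult = 10000   (default is 1000)
    # Or, equivalently, in conf/zeppelin-env.sh before starting Zeppelin:
    export ZEPPELIN_SPARK_MAXRESULT=10000

Then restart the Spark interpreter for the change to take effect.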
 


     On Thursday, 25 June 2015 10:00 AM, Su She <suhsheka...@gmail.com> wrote:
   

 Hello Everyone,
Excited to be making progress, and thanks to the community for providing help 
along the way. This stuff is all really cool.

Questions:
1) I noticed that the limit for the visual representation is 1000 results. Are 
there any short-term plans to expand the limit? It seems a little on the low 
side, as one of the main reasons for working with Spark/Hadoop is to work with 
large datasets.
2) When can I use the %sql function? Is it only on tables that have been 
registered? I have been having trouble registering tables unless I do:
// Apply the schema to the RDD.
val peopleSchemaRDD = sqlContext.applySchema(rowRDD, schema)

// Register the SchemaRDD as a table.
peopleSchemaRDD.registerTempTable("people")
I am having lots of trouble registering tables through HiveContext, or even 
duplicating the Zeppelin tutorial. Is this issue mitigated by using DataFrames 
(I am planning to move to 1.3 very soon)?
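
For reference, my (untested) understanding is that the Spark 1.3 DataFrame 
version of the same registration, assuming the same rowRDD and schema as above, 
would look roughly like this:

// Spark 1.3: build a DataFrame from the existing RDD[Row] and the schema,
// then register it as a temp table so it is visible to %sql.
val peopleDF = sqlContext.createDataFrame(rowRDD, schema)
peopleDF.registerTempTable("people")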

Bug:
When I do this:

z.show(sqlContext.sql("select * from sensortable limit 100"))

I get the table, but I also get text results at the bottom; please see the 
attached image. In case the image doesn't go through: I basically get the 
table and everything works well, but the select statement also returns text 
output (regardless of whether it returns 100 results or all of them).
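
In case it helps narrow things down, my understanding is that the equivalent 
query as a %sql paragraph (assuming sensortable is registered as a temp table) 
would be:

%sql
select * from sensortable limit 100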
 
Thank you!
Best,
Su

  
