> For example, I have an event called add_photo, from which I want to
> calculate trending tags for added photos for the last x minutes. Then I'd
> like to aggregate that by country, etc. I've built the streaming part,
> which reads from Kafka, calculates the needed results, and gets the
> appropriate RDDs; the question now is how to connect it to a UI.
>
> Are there any general practices on how to pass parameters to Spark from
> some custom-built UI, how to organize data retrieval, what intermediate
> storages to use, etc.?
>
> Thanks in advance.
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Visualizing-Spark-Streaming-data-tp22160.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
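To make the quoted use case concrete, here is a minimal sketch of the kind of DStream pipeline described above: reading add_photo events from Kafka and keeping a sliding count of tags over the "last x minutes". The broker address, topic name, record layout ("tag,country"), window length, and batch interval are all assumptions for illustration, not details taken from the thread.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object TrendingTags {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TrendingTags")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint("/tmp/trending-tags-checkpoint") // required by the inverse-reduce window below

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092") // assumed broker
    val topics = Set("add_photo")                                     // assumed topic name

    // Assume each record is a simple "tag,country" string standing in for the real payload.
    val events = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
      .map(_._2)

    // Count tags over a sliding 10-minute window, refreshed every 10 seconds.
    // Keying by (country, tag) instead of tag would give the per-country view.
    val trendingTags = events
      .map(line => (line.split(",")(0), 1L))
      .reduceByKeyAndWindow((a: Long, b: Long) => a + b,
                            (a: Long, b: Long) => a - b,
                            Minutes(10), Seconds(10))

    trendingTags.print() // placeholder output; see the UI-facing sketch below

    ssc.start()
    ssc.awaitTermination()
  }
}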
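On the "how to connect it to a UI" part, one common pattern (not something prescribed in this thread) is to push each window's result into an external store that the UI polls, using foreachRDD. The sketch below, continuing from the trendingTags stream above, writes the top tags into a Redis hash; the Redis endpoint and the "trending_tags" key name are hypothetical, and any store the UI can query (a database, a cache, even files) would serve the same purpose.

import org.apache.spark.streaming.dstream.DStream
import redis.clients.jedis.Jedis

// Hypothetical bridge between the streaming job and a custom UI: on every
// slide interval, overwrite a small Redis hash that the UI polls.
def publishToRedis(trendingTags: DStream[(String, Long)]): Unit = {
  trendingTags.foreachRDD { rdd =>
    // Collect only the top N pairs to the driver so the payload stays small.
    val top = rdd.map(_.swap).top(20).map(_.swap)

    val jedis = new Jedis("localhost", 6379) // assumed Redis endpoint
    try {
      jedis.del("trending_tags")             // hypothetical key name
      top.foreach { case (tag, count) =>
        jedis.hset("trending_tags", tag, count.toString)
      }
    } finally {
      jedis.close()
    }
  }
}

Passing parameters the other way (from the UI into the job, such as the "last x minutes") is more awkward, because a DStream window length is fixed when the streaming job is defined; one pragmatic option is to compute at a fine granularity and let the UI side aggregate over whatever range the user picks.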
Hello Everyone,
I wanted to hear the community's thoughts on what (open-source) tools have
been used to visualize data from Spark/Spark Streaming? I've taken a look at
Zeppelin, but had some trouble working with it.
A couple of questions:
1) I've looked at a couple of blog posts and it seems like spar