Hi,

My case is very similar to what is described in this section of the Spark
documentation on programmatically specifying the schema:
http://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema
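Roughly, the pattern from that page, adapted to the header example from my
earlier mail, looks like this (a simplified Scala sketch; the field names
and types would really only be discovered at runtime):

import org.apache.spark.sql.types._

// Pretend these were parsed from the file header and a sample of rows:
val names = Seq("id", "first_name", "last_name", "last_login")
val types = Seq(IntegerType, StringType, StringType, LongType)

// Build the schema programmatically instead of declaring a case class:
val schema = StructType(names.zip(types).map {
  case (name, dataType) => StructField(name, dataType, nullable = true)
})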

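On the Flink side, I was hoping something along these lines might be
possible with the Row type and RowTypeInfo (just a sketch of what I am
after, assuming a comma-separated file at a made-up path; this is not
code I have running):

import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.types.Row

val env = StreamExecutionEnvironment.getExecutionEnvironment

// The same runtime-discovered schema as in the Spark sketch above:
val names = Array("id", "first_name", "last_name", "last_login")
val types = Array[TypeInformation[_]](
  BasicTypeInfo.INT_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.LONG_TYPE_INFO)

// RowTypeInfo carries the schema, so the element type of the stream can
// stay Row even though the fields are only known at runtime.
implicit val rowType: TypeInformation[Row] = new RowTypeInfo(types, names)

val rows = env
  .readTextFile("/path/to/input.csv") // hypothetical input path
  .map { line =>
    val parts = line.split(",")
    val row = new Row(4)
    row.setField(0, parts(0).toInt)
    row.setField(1, parts(1))
    row.setField(2, parts(2))
    row.setField(3, parts(3).toLong)
    row
  }
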
I hope this clarifies it.

Thanks,
Luqman

On Mon, Feb 13, 2017 at 12:04 PM, Tzu-Li (Gordon) Tai <tzuli...@apache.org>
wrote:

> Hi Luqman,
>
> From your description, it seems that you want to infer the type (case
> class, tuple, etc.) of a stream dynamically at runtime.
> AFAIK, I don’t think this is supported in Flink. You’re required to have
> defined types for your DataStreams.
>
> Could you also provide some example code showing what the functionality you
> have in mind looks like?
> That would help clarify whether I have misunderstood and there is actually a
> way to do it.
>
> - Gordon
>
> On February 12, 2017 at 4:30:56 PM, Luqman Ghani (lgsa...@gmail.com)
> wrote:
>
> For example, if a file has the header: id, first_name, last_name, last_login,
> we would infer the schema as: Int, String, String, Long.
>
>
