and since Flink provides
a comprehensive model for type information, it may be good to look for such
XML functions.
I did not find any InputFormat for XML either; did I miss something?
Thanks in advance, all the best
François Lacombe
DCbrain
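No XML InputFormat ships with Flink as far as I know. A minimal sketch of one
common workaround, splitting the input on a record-closing tag and parsing each
chunk downstream (the <record> element name and the file path are assumptions):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;

public class XmlReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Split the file on the record-closing tag so each emitted string
        // holds exactly one XML record.
        TextInputFormat format = new TextInputFormat(new Path("/data/records.xml"));
        format.setDelimiter("</record>");

        DataSet<String> records = env
                .createInput(format, Types.STRING)
                .map(chunk -> chunk.trim() + "</record>"); // restore the stripped tag

        records.print(); // parse each record with JAXB or DOM downstream
    }
}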
anyone interested, all the best
François Lacombe
DCbrain
a/org/apache/flink/formats/avro/AvroRowDeSerializationSchemaTest.java
> [2] https://issues.apache.org/jira/browse/FLINK-11569
>
> On Fri, Feb 8, 2019 at 8:51 AM françois lacombe <
> francois.laco...@dcbrain.com> wrote:
>
>> Hi Rong,
>>
>> Thank you f
> Best, Fabian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/sourceSinks.html#define-a-tablefactory
>
>
> On Mon, Feb 11, 2019 at 21:09, françois lacombe <
> francois.laco...@dcbrain.com>:
>
>> Hi Fabian,
Congratulations Thomas,
Thanks for the help you provide and your useful inputs.
François
On Wed, Feb 13, 2019 at 03:13, Kurt Young wrote:
> Congrats Thomas!
>
> Best,
> Kurt
>
>
> On Wed, Feb 13, 2019 at 10:02 AM Shaoxuan Wang
> wrote:
>
>> Congratulations, Thomas!
>>
>> On Tue, Feb 12, 2019 at 5:59
implementation, but usually
> the format reads the split as a stream and does not read the split as a
> whole before emitting records.
>
> Best,
> Fabian
>
> On Mon, Feb 4, 2019 at 12:06, françois lacombe <
> francois.laco...@dcbrain.com>:
>
>>
Hi all,
An error is currently raised when using table.insertInto("registeredSink")
in Flink 1.7.0 when the types of the table and the sink don't match.
I get the following:
org.apache.flink.table.api.ValidationException: Field types of query result
and registered TableSink null do not match.
Query
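A hedged sketch of how the sink registration has to line up with the query
result in 1.7 (field names, types, and the CsvTableSink path are assumptions):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sinks.CsvTableSink;

// Assuming tableEnv (BatchTableEnvironment) and result (Table) already exist.
String[] fieldNames = {"name", "count"};
TypeInformation<?>[] fieldTypes = {Types.STRING, Types.LONG};

// The field types passed here must match the query result exactly,
// otherwise insertInto() raises the ValidationException above.
tableEnv.registerTableSink("registeredSink", fieldNames, fieldTypes,
        new CsvTableSink("/tmp/out.csv", "|"));
result.insertInto("registeredSink");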
pache.org/projects/flink/flink-docs-release-1.3/dev/table/sourceSinks.html#table-sources-sinks
>
> On Wed, Feb 6, 2019 at 3:06 AM françois lacombe <
> francois.laco...@dcbrain.com> wrote:
>
>> Hi all,
>>
>> I currently get a json string from my pgsql source with nested
Hi all,
I currently get a JSON string from my pgsql source, with nested objects to
be converted into Flink's Row.
Nested JSON objects should go in nested Rows.
An Avro schema defines the structure my source should conform to.
According to this JSON:
{
  "a": "b",
  "c": "d",
  "e": {
    "f": "g"
distributed to tasks for reading.
> How a task reads a file split depends on the implementation, but usually
> the format reads the split as a stream and does not read the split as a
> whole before emitting records.
>
> Best,
> Fabian
>
> On Mon, Feb 4, 2019 at 12:06, françois lacombe wrote:
You can point a file-based input format to a directory and the input
> format should read all files in that directory.
> That works as well for TableSources that internally use file-based
> input formats.
> Is that what you are looking for?
>
> Best, Fabian
>
> On Mon, Jan 28, 2019 at 1
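A hedged sketch of what Fabian describes, pointing a row-based CSV input
format at a directory (the path and field types are assumptions):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.core.fs.Path;
import org.apache.flink.types.Row;

public class ReadCsvDirectory {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        TypeInformation<?>[] fieldTypes = {Types.STRING, Types.INT};
        // Pointing the format at a directory makes it read every file in it.
        RowCsvInputFormat format =
                new RowCsvInputFormat(new Path("/data/csv-dir"), fieldTypes);
        format.setNestedFileEnumeration(true); // also descend into sub-directories

        DataSet<Row> rows = env.createInput(format, new RowTypeInfo(fieldTypes));
        rows.print();
    }
}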
for BatchTableSource, I guess
the cost of making them available for streaming would be quite high for me
for the moment.
Has someone ever done this?
Am I wrong to expect to do this with a batch job?
All the best
François Lacombe
ink-docs-release-1.6/dev/table/tableApi.html
> [2] https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/sql.html
>
> 2018-09-05 12:22 GMT+02:00 françois lacombe
> :
>
>> Hi all,
>>
>> I'm trying to use CONVERT or CAST functions
Hi all,
I'm trying to use the CONVERT or CAST functions from the Calcite docs to query a
table with the Table API.
https://calcite.apache.org/docs/reference.html
csv_table.select("col1, CONCAT('field1:', col2, ',field2:', CAST(col3 AS string))");
col3 is actually described as int in the CSV schema, and CONCAT
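In case it helps others hitting the same wall: CONCAT and CAST are SQL
functions, so one way around the Table API expression parser is to run the
projection as SQL, where the target type is VARCHAR rather than string (a
hedged sketch, assuming csv_table is already registered with tableEnv):

// Same projection as above, expressed as SQL.
Table result = tableEnv.sqlQuery(
        "SELECT col1, CONCAT('field1:', col2, ',field2:', CAST(col3 AS VARCHAR)) "
        + "FROM csv_table");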
I think I will publish it next week.
>
> Regards,
> Timo
>
>
> On 31.08.18 at 08:36, françois lacombe wrote:
>
> Hi Timo
>
> Yes it helps, thank you.
> I'll start building such a utility method. Are you interested in getting the
> source?
>
> According to mapping h
> I hope this helps.
>
> Regards,
> Timo
>
> On 31.08.18 at 07:40, françois lacombe wrote:
>
> Hi all,
>>
>> Today I'm looking into deriving an Avro schema JSON string into a
>> Schema object.
>> In the overview of https://ci.apache.org/projects
Hi all,
Today I'm looking into deriving an Avro schema JSON string into a Schema
object.
In the overview of
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/connect.html
Avro is used as a format and never as a schema.
This was a topic in FLINK-9813.
I can get a TableSchema
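For the record, a minimal sketch using the plain Avro API plus flink-avro's
converter (the schemaJson String holding the schema is assumed to exist):

import org.apache.avro.Schema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.types.Row;

// Parse the JSON schema string into an Avro Schema object ...
Schema schema = new Schema.Parser().parse(schemaJson);
// ... and/or derive Flink type information for Rows directly from it.
TypeInformation<Row> rowType = AvroSchemaConverter.convertToTypeInfo(schemaJson);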
>> Hi francois,
>>
>> Maybe you can refer to the comments of this source code?[1]
>>
>> https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/BatchTableEnvironment.scala#L143
>>
>
François
2018-08-28 4:37 GMT+02:00 vino yang :
> Hi Francois,
>
> Yes, the withFormat API comes from an instance of BatchTableDescriptor,
> and the BatchTableDescriptor instance is returned by the connect API, so
> you should call BatchTableEnvironment#connect first.
>
> Thank
Hi all,
I'm currently trying to load a CSV file's content with the Flink 1.6.0 Table API.
This error is raised as I try to execute the code written in the docs:
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/connect.html#csv-format
ExecutionEnvironment env =
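For comparison, a hedged sketch of the documented connector path in 1.6 (the
file path, delimiter, and field names are assumptions):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.descriptors.Csv;
import org.apache.flink.table.descriptors.FileSystem;
import org.apache.flink.table.descriptors.Schema;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

// connect() returns the BatchTableDescriptor that withFormat() is called on.
tableEnv.connect(new FileSystem().path("/data/input.csv"))
        .withFormat(new Csv()
                .field("col1", Types.STRING)
                .field("col2", Types.INT)
                .fieldDelimiter(";"))
        .withSchema(new Schema()
                .field("col1", Types.STRING)
                .field("col2", Types.INT))
        .registerTableSource("csv_table");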
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/common.html#convert-a-table-into-a-datastream-or-dataset
>
>
>
> On Fri, Aug 24, 2018 at 8:04 AM françois lacombe <
> francois.laco...@dcbrain.com> wrote:
>
>> Hi Timo,
>>
>> Than
> Regards,
> Timo
>
>
> On 23.08.18 at 18:54, françois lacombe wrote:
>
> Hi all,
>>
>> I'm looking for best practices regarding Tuple instance creation.
>>
>> I have a TypeInformation object produced by AvroSchemaConverter.convertToType
ling class
with parametrized types.
My goal is to parse several CSV files with different structures described in
an Avro schema.
It would be great not to hard-code structures in my Java code and only get
type information at runtime from Avro schemas.
Is this possible?
Thanks in advance
François Lacombe
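A hedged sketch of how that could look, deriving the row type at runtime from
the Avro schema string instead of hard-coding a Tuple class (variable names
and the input path are assumptions):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.types.Row;

// Assuming env (ExecutionEnvironment) and avroSchemaJson (String) exist.
TypeInformation<Row> rowType = AvroSchemaConverter.convertToTypeInfo(avroSchemaJson);
RowTypeInfo rowInfo = (RowTypeInfo) rowType;

// Row has no fixed arity, so the same code handles any schema at runtime,
// unlike Tuple classes whose arity is fixed at compile time.
RowCsvInputFormat format =
        new RowCsvInputFormat(new Path("/data/input.csv"), rowInfo.getFieldTypes());
DataSet<Row> rows = env.createInput(format, rowType);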
I have different CSV files?
Thanks in advance for your input, all the best
François Lacombe
Avro schemas a better role in Flink in
> further versions?
> Haven't heard about Avro for CSV. You can open a JIRA for it. Maybe also
> contribute to Flink :-)
>
>
> On Tue, Jul 10, 2018 at 11:32 PM, françois lacombe <
> francois.laco...@dcbrain.com> wrote:
>
>>
    throw new NoSuchFieldException(field_nfo.getName());
  }
  // Declare the field in the source builder
  src_builder.field(field_nfo.getName(), primitiveTypes.get(field_nfo.getType()));
}
All the best
François
> On Mon, Jul 9, 2018 at 11:03 PM, françois lacombe <
> francois.laco...@dcbrain.com> wro
/formats/avro/AvroInputFormat.java
>
> On Fri, Jul 6, 2018 at 11:32 PM, françois lacombe <
> francois.laco...@dcbrain.com> wrote:
>
>> Hi Hequn,
>>
>> The Table-API is really great.
>> I will use it and certainly love it to solve the issues I mentioned b
?
Big thanks for putting me on the Table API's way :)
Best Regards
François Lacombe
2018-07-06 16:53 GMT+02:00 Hequn Cheng :
> Hi francois,
>
> If I understand correctly, you can use SQL or the Table API to solve your
> problem.
> As you want to project part of the columns from the source, a columnar storage
possible or if I have to change my mind about
this?
Thanks in advance, all the best
François Lacombe