Google Cloud Bigtable also has such a page at
https://cloud.google.com/bigtable/docs/integrations
On Thu, Jul 27, 2017 at 6:57 PM, Robert Yokota wrote:
One thing I really appreciate about HBase is its flexibility. It doesn't
enforce a schema, but also doesn't prevent you from building a schema layer
on top. It is very customizable, allowing you to push arbitrary code to
the server in the form of filters and coprocessors.
Not having such
Interesting blog.
From your experience, is there anything on the HBase side where you see room
for improvement?
Which HBase release are you using?
Cheers
On Thu, Jul 27, 2017 at 3:11 PM, Robert Yokota wrote:
In case anyone is interested, I wrote a blog on how to analyze graphs
stored in HBase with Apache Flink Gelly:
https://yokota.blog/2017/07/27/graph-analytics-on-hbase-with-hgraphdb-and-apache-flink-gelly/
Hello,
We're having a lot of issues while trying to submit a job remotely using the
Flink CLI command-line tool. We have tried different configurations, but in all
of them we get errors from Akka while trying to connect. I will try to
summarise the configurations we've tried.
- Flink 1.3.0
Hello,
This is the system log, not the task manager's application log, which is
what I was referring to.
If you're using a standalone cluster, then the task manager log should
be in the logs directory inside your Flink installation.
Regards,
Kien
On 7/27/2017 3:49 PM, Shashwat Rastogi
Hi Wei,
what do you mean by "windowedStream"?
The result of
dataStream.join(otherStream).where().equalTo()
expects a window to be specified. In each window, based on the time and window
characteristics you defined, both sources will collect elements that fit into
the window and, at its end,
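The semantics described above (not the Flink API itself) can be sketched in plain Java: elements from both streams that fall into the same tumbling window and have equal keys are paired when the window closes. The `Event` record, the field names, and the window arithmetic are all made up for illustration:

```java
import java.util.*;

public class WindowedJoinSketch {
    // Hypothetical record: a join key, an event timestamp (millis), and a payload.
    record Event(String key, long ts, String payload) {}

    // Pair elements of a and b that share a key and fall into the same
    // tumbling window of the given size (the "join" is emitted per window).
    static List<String> join(List<Event> a, List<Event> b, long windowSize) {
        // Group both inputs by (window index, key), mimicking where()/equalTo().
        Map<String, List<Event>> left = new HashMap<>();
        Map<String, List<Event>> right = new HashMap<>();
        for (Event e : a)
            left.computeIfAbsent((e.ts() / windowSize) + "|" + e.key(),
                    k -> new ArrayList<>()).add(e);
        for (Event e : b)
            right.computeIfAbsent((e.ts() / windowSize) + "|" + e.key(),
                    k -> new ArrayList<>()).add(e);
        // At each window's "end", emit the cross product of matching groups.
        List<String> out = new ArrayList<>();
        for (var entry : left.entrySet())
            for (Event l : entry.getValue())
                for (Event r : right.getOrDefault(entry.getKey(), List.of()))
                    out.add(l.payload() + "+" + r.payload());
        return out;
    }
}
```

Elements in different windows never join, even with equal keys, which is exactly why the window specification is mandatory.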
Hi Gordon, thanks for your reply, already implemented ;)
> On 27 Jul 2017, at 12:57, Tzu-Li (Gordon) Tai wrote:
Hi!
Yes, you can provide a custom writer for the BucketingSink via
BucketingSink#setWriter(…).
The AvroKeyValueSinkWriter is a simple example of a writer that uses Avro for
serialization, and takes as input KV 2-tuples.
If you want to have a writer that takes as input your own event types,
Hi Kien,
Sorry it took me some time to fetch the logs. I am attaching the logs of the machine which died due to lack of free memory.
Jun 26 13:00:01 staging-east-dataplatform-02-c01 systemd: Created slice user-0.slice.
Jun 26 13:00:01 staging-east-dataplatform-02-c01 systemd: Starting user-0.slice.
Jun
Hi Egor,
There is the Row type, which is not strongly typed (unlike TupleX) but
supports an arbitrary number of fields and null-valued fields.
The DataSet API does not have a split operator, and implementing one would
be much more difficult than one might expect. The problem is in the
optimizer
Hi,
it depends on the file format whether a file can be read in parallel or
not. Basically, you have to be able to identify valid offsets from which
you can start reading.
There are a few techniques, like fixed-size blocks with padding or a footer
section with split offsets, but if the file is
Hi Paolo,
do you get the ClassNotFoundException for TheGlobalModel or for another
class?
Did you maybe forget to include SerializationUtils in the classpath?
Best, Fabian
2017-07-26 16:14 GMT+02:00 Paolo Cristofanelli <
cristofanelli.pa...@gmail.com>:
> Hi,
>
> I am trying to write and read in
Hi Chao,
Ad. 1 You could implement it with an IterativeCondition. Something like this:
Pattern<Event, ?> pattern = Pattern.<Event>begin("first").where(new
SimpleCondition<Event>() {
    @Override
    public boolean filter(Event value) throws Exception {
        return value.equals("A") || value.equals("B");
    }
});
Hi
Is there a way to handle mapping errors in Flink?
For example, when a string is a valid double, write it to one table, otherwise
to another?
If not, what problems do you see with this idea, and if I make a PR,
where should I start to implement this feature?
I saw Tuple1, 2, etc. Many
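The validation-and-routing logic itself is simple to express outside of any Flink API (in Flink, side outputs would be one way to split such a stream into two sinks). A plain-Java sketch with made-up names, just to show the idea:

```java
import java.util.*;

public class DoubleRouter {
    // True if the string parses as a double, false otherwise.
    static boolean isValidDouble(String s) {
        if (s == null) return false;
        try {
            Double.parseDouble(s.trim());
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Route each input to the "valid" or "invalid" bucket — standing in
    // for the two tables the question asks about.
    static Map<String, List<String>> route(List<String> inputs) {
        Map<String, List<String>> out = new HashMap<>();
        out.put("valid", new ArrayList<>());
        out.put("invalid", new ArrayList<>());
        for (String s : inputs) {
            out.get(isValidDouble(s) ? "valid" : "invalid").add(s);
        }
        return out;
    }
}
```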