Hi,
I just tried to compile 4.6.0 but got errors as follows:
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java:67:76:
'+' is not followed by whitespace.
Audit done.
[INFO] There are 235 checkstyle errors.
[ERROR] TraceServlet.java[0] (javadoc)
What version of HBase are you using with Phoenix? Does your query ever
finish -- is it log noise or does it lead to real timeouts?
On Mon, Sep 28, 2015 at 12:35 PM, Konstantinos Kougios <
kostas.koug...@googlemail.com> wrote:
> I've got a 500 mil rows table on a fairly mediocre cluster. I had
You're seeing checkstyle warnings. Feel free to disable this step with
-Dcheckstyle.skip.
If Maven reports BUILD SUCCESS, then you're good.
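For reference, a full invocation with that flag might look like the following (the goals here are an assumption; use whatever goals you normally build with):

```shell
# Skip the maven-checkstyle-plugin check so style violations don't fail the build
mvn clean install -Dcheckstyle.skip
```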
On Tue, Jan 19, 2016 at 3:11 AM, Ascot Moss wrote:
> Hi,
>
> I just tried to compile 4.6.0 but got errors as follows:
>
>
Hi guys,
I'm doing my best to follow along with [0], but I'm hitting some stumbling
blocks. I'm running with HDP 2.3 for HBase and Spark. My Phoenix build is
much newer, basically 4.6-branch + PHOENIX-2503, PHOENIX-2568. I'm using
pyspark for now.
I've added phoenix-$VERSION-client-spark.jar to
This likely has to do with hbase scanners running into lease expiration.
Try overriding the value of hbase.client.scanner.timeout.period in the
server side hbase-site.xml to a large value.
We have a feature coming out in Phoenix 4.7 (soon to be released) that will
take care of automatically
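For anyone hitting this later, the override mentioned above would go in the server-side hbase-site.xml roughly like this (the value shown, 20 minutes, is only an example; tune it to be comfortably longer than your slowest scan):

```xml
<property>
  <!-- Scanner lease/timeout period in milliseconds; the default is 60000 (1 minute). -->
  <name>hbase.client.scanner.timeout.period</name>
  <value>1200000</value>
</property>
```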
Hi Willem,
Let us know how we can help as you start getting into this, in particular
with your schema design based on your query requirements.
Thanks,
James
On Mon, Jan 18, 2016 at 8:50 AM, Pariksheet Barapatre
wrote:
> Hi Willem,
>
> Use Phoenix bulk load. I guess your
Sadly, it needs to be installed onto each Spark worker (for now). The
executor config tells each Spark worker to look for that file to add to its
classpath, so once you have it installed, you'll probably need to restart
all the Spark workers.
I co-locate Spark and HBase/Phoenix nodes, so I just
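For the archives, the executor config being discussed typically lives in spark-defaults.conf; a sketch (the jar path and version are assumptions, so substitute your actual Phoenix install location):

```properties
# The jar must already exist at this path on every Spark worker / NodeManager host.
spark.executor.extraClassPath  /usr/lib/phoenix/phoenix-4.6.0-client-spark.jar
spark.driver.extraClassPath    /usr/lib/phoenix/phoenix-4.6.0-client-spark.jar
```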
I'm using Spark on YARN, not spark stand-alone. YARN NodeManagers are
colocated with RegionServers; all the hosts have everything. There are no
spark workers to restart. You're sure it's not shipped by the YARN runtime?
On Tue, Jan 19, 2016 at 5:07 PM, Josh Mahonin wrote:
>
Right, this cluster I just tested on is HDP 2.3.4, so it's Spark on YARN as
well. I suppose the JAR is probably shipped by YARN, though I don't see any
logging that says so, so I'm not certain how the nuts and bolts of that work.
By explicitly setting the classpath, we're bypassing Spark's native JAR
Good day,
I use this query:
> SELECT sequence_schema, sequence_name, start_with, increment_by, cache_size
> FROM SYSTEM."SEQUENCE";
>
> In phoenix.sh it works,
but over JDBC I get no response. I tried other queries that access non-system
tables and they work fine.
Is it possible to access
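For what it's worth, a minimal JDBC sketch for that query looks like the following (the ZooKeeper quorum "localhost" is a placeholder, and it assumes the Phoenix client jar is on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceQuery {
    // SEQUENCE is an SQL keyword, so the table name stays double-quoted,
    // exactly as in the query above.
    static final String QUERY =
        "SELECT sequence_schema, sequence_name, start_with, increment_by, cache_size "
      + "FROM SYSTEM.\"SEQUENCE\"";

    public static void main(String[] args) throws Exception {
        // "localhost" is a placeholder ZooKeeper quorum; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(QUERY)) {
            while (rs.next()) {
                // Phoenix normalizes unquoted identifiers to uppercase.
                System.out.println(rs.getString("SEQUENCE_NAME"));
            }
        }
    }
}
```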
On Tue, Jan 19, 2016 at 4:17 PM, Josh Mahonin wrote:
> What version of Spark are you using?
>
Probably HDP's Spark 1.4.1; that's what the jars in my install say, and the
welcome message in the pyspark console agrees.
Are there any other traces of exceptions anywhere?
>
No