'), to_number("close"), to_number("high"),
to_number("low"), to_number("open"), "ticker", "stock", to_number("volume")
from "tsco" where "close" != '-' and "high" != '-'
thanks Ankit
Hi,
Has anyone managed to read a Phoenix table in Spark 2, by any chance, please?
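For what it is worth, a minimal sketch of one route in Scala, pasted into spark-shell with the phoenix-spark jar on the classpath. The table name "marketDataHbase" and the ZooKeeper quorum rhes564:2181 are borrowed from elsewhere in this thread, so adjust as needed:

// phoenix-spark data source; "table" and "zkUrl" are its options
val df = spark.read
  .format("org.apache.phoenix.spark")
  .option("table", "\"marketDataHbase\"")   // quoted to preserve mixed case
  .option("zkUrl", "rhes564:2181")
  .load()

df.printSchema()
df.show(5, false)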
Thanks
takes the row key to be the primary key, i.e. unique, much like an RDBMS (Oracle). It sounds like Phoenix relies on that when creating a table on top of an Hbase table. Is this assessment correct, please?
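In Phoenix terms the mapping looks roughly like this, a sketch only; the column names are borrowed from later messages in this thread rather than from an actual schema:

-- The single PRIMARY KEY column maps to the Hbase row key
CREATE VIEW "marketDataHbase" (
  "pk"                  VARCHAR PRIMARY KEY,
  "price_info"."ticker" VARCHAR,
  "price_info"."price"  VARCHAR
);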
Thanks
column=cf1:column-1, timestamp=1475856238280, value=kzeuRUCqWYBKXcbPRSWMZLqPpsrLvgkOMLjDArtdJkoOlPGKZs
A4682060     column=cf1:column-1, timestamp=1475857115666, value=MTXnucpYRxKbYSVmTVaFtPteWAtxZEUeTMXPntsVLIsMGDghcs
A54369308    column=cf1:column-1, timestamp=1475856238328, value=HGY
using JDBC or Zeppelin on
Phoenix through JDBC with no problem.
thanks
Hi all,
Can someone provide me with a sample JDBC connection from Spark 2 to
Phoenix please?
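A sketch of the plain JDBC route from spark-shell, assuming the Phoenix client jar is on the driver and executor classpath; the table name and quorum are again borrowed from this thread, not verified:

val df = spark.read
  .format("jdbc")
  .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")  // Phoenix JDBC driver class
  .option("url", "jdbc:phoenix:rhes564:2181")                 // jdbc:phoenix:<zookeeper quorum>
  .option("dbtable", "\"marketDataHbase\"")                   // quoted: mixed-case table name
  .load()

df.show(5)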
Thanks
indexes must have the
hbase.regionserver.wal.codec property set to
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the
hbase-sites.xml of every region server. tableName=TICKER_INDEX
(state=0,code=-1)
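For reference, the property named in the error goes into hbase-site.xml on every region server, roughly like this (restart the region servers afterwards):

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>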
Thanks
Yes, correct, that is hbase-site.xml, Ted.
I am running Hbase in standalone mode. Do I need a region server?
thx
This works when the index is created on the Phoenix server as opposed to the Phoenix client.
Thanks
under the Storage tab. It will tell you what is stored.
Spark uses execution memory for the result sets of operations (RDD + DataFrame) and storage memory for anything cached with cache() or persist(). You can verify all this in the Spark UI.
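A minimal sketch of the caching side in Scala, for spark-shell (nothing here beyond cache()/persist() and the Storage tab comes from the original post):

import org.apache.spark.storage.StorageLevel

// any DataFrame will do; spark.range is just a stand-in dataset
val df = spark.range(0L, 1000000L).toDF("id")
val cached = df.persist(StorageLevel.MEMORY_AND_DISK)  // consumes storage memory
cached.count()                                          // the action materialises the cache
// the cached DataFrame now appears under the Storage tab of the Spark UI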
HTH
(state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=marketDataHbase
Any ideas what causes it?
Thanks
Hi,
I sorted this one out by dropping the row from the SYSTEM catalog.
Thanks
Hi,
I have an Hbase table that is populated via org.apache.hadoop.hbase.mapreduce.ImportTsv through a bulk load every 15 minutes. This works fine.
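The kind of ImportTsv invocation implied here would look roughly like the following; the column mapping, separator and input path are illustrative guesses, not taken from the original post:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns=HBASE_ROW_KEY,price_info:ticker,price_info:price \
  marketDataHbase \
  /data/prices/latest.csv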
In Phoenix I created a view on this table
jdbc:phoenix:rhes564:2181> create index marketDataHbase_idx on
"marketDataHbase" ("price_info"."ticker", "price
any append to Hbase, then it is pretty useless.
In my case data is inserted into the Hbase table. I am just using Phoenix for data queries (DQ) as opposed to inserts.
Regards,
Dr Mich Talebzadeh
LinkedIn *
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPC
A workaround I deployed was to rebuild the index immediately after a bulk load of data into the Hbase table:
ALTER INDEX MARKETDATAHBASE_IDX1 ON "marketDataHbase" REBUILD;
Dr Mich Talebzadeh
LinkedIn *
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV
AveragePrice
from marketdatahbase
) tmp
ORDER BY ticker, Date
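The truncated query above appears to have this general shape. Everything below is a reconstruction sketch; the "timecreated" and "price" columns and the tradeDate alias are assumptions rather than the original statement:

SELECT ticker, tradeDate, AveragePrice
FROM (
  SELECT "ticker"                AS ticker,
         TO_DATE("timecreated")  AS tradeDate,
         AVG(TO_NUMBER("price")) AS AveragePrice
  FROM "marketDataHbase"
  GROUP BY "ticker", TO_DATE("timecreated")
) tmp
ORDER BY ticker, tradeDate;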
Thanks
Hi,
My queries in Phoenix pick up the GMT timezone by default.
I need them to default to Europe/London:
0: jdbc:phoenix:rhes564:2181> select CURRENT_DATE(),
CONVERT_TZ(CURRENT_DATE(), 'UTC', 'Europe/London');
+-+-+
| DATE '2016
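One client-side setting that may help, an assumption on my part rather than something verified against this Phoenix version, is the phoenix.query.dateFormatTimeZone property in the client's hbase-site.xml:

<property>
  <name>phoenix.query.dateFormatTimeZone</name>
  <value>Europe/London</value>
</property>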
Thanks Ankit,
A couple of questions:
1. Will a bulk load from Phoenix update the underlying Hbase table?
2. Do I need to replace the Phoenix view on Hbase with a CREATE TABLE?
regards
You are correct, Ankit.
However, I can use Spark SQL on the Hbase table directly, or on a Hive table built on Hbase.
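For the Hive route, a minimal sketch of a Hive table mapped onto the Hbase table; the column mapping and names are illustrative, borrowed from this thread rather than from an actual schema:

CREATE EXTERNAL TABLE marketdatahbase_hive (
  rowkey STRING,
  ticker STRING,
  price  STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,price_info:ticker,price_info:price")
TBLPROPERTIES ("hbase.table.name" = "marketDataHbase");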
regards,
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I tried putting it in "" etc but no joy I am afraid!
Not sure whether phoenix-4.8.1-HBase-1.2-client.jar is the correct jar file?
Thanks
Why is it that when I create Phoenix tables and columns on Hbase tables they come out in UPPERCASE, regardless of the case of the underlying Hbase table?
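As far as I know, Phoenix folds unquoted identifiers to upper case, while double-quoted identifiers keep their case and must then always be quoted. A small sketch with made-up names:

CREATE TABLE marketdata ("pk" VARCHAR PRIMARY KEY, "val" VARCHAR);    -- stored as MARKETDATA
CREATE TABLE "marketData" ("pk" VARCHAR PRIMARY KEY, "val" VARCHAR);  -- mixed case preserved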
Thanks
store is Hbase. All Phoenix does is allow one to create SQL on top of Hbase to manipulate the Hbase table with DDL and DQ (data query). It does not store data itself.
I trust this is the correct assessment.
(I believe a Jira is open with Hbase on this.)
So we have a resilient design here. Phoenix secondary indexes are also very useful.
BTW, after every new append can we run UPDATE STATISTICS on Phoenix tables and indexes, as we do with Hive?
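Phoenix does have an UPDATE STATISTICS statement; something along these lines (table name borrowed from this thread) could be run after each load:

UPDATE STATISTICS "marketDataHbase" ALL;    -- table plus its indexes and columns
UPDATE STATISTICS "marketDataHbase" INDEX;  -- index statistics only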
Sorry, I forgot you were referring to "multi tenancy".
Can you please elaborate on this?
Thanks
datasources.LogicalRelation.<init>(LogicalRelation.scala:40)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
at org.apache.spark.sql.SQLContext.load(SQLCon
Thanks
SCAN OVER MARKETDATAHBASE_IDX1           |
|    SERVER FILTER BY FIRST KEY ONLY       |
|    SERVER AGGREGATE INTO SINGLE ROW      |
+-------------------------------------------+
HTH
its own statistics?
3. Are the statistics collected similar to the statistics for the storage index in a Hive ORC table?
4. Can the statistics be used in predicate push-down?
Thanks
M, Fawaz Enaya wrote:
>
> Thanks for your answer but why it gives 1 way parallel and can not be more?
>
> On Sunday, 30 October 2016, Mich Talebzadeh
> wrote:
>
>> If you create a secondary index in Phoenix on the table on single or
>> selected columns, that index (whic
Thanks Sergey for clarification.
Regards
does by supporting a batch of transactions with commit/rollback etc.
Has anyone had the experience of using Phoenix on Hbase for transactional
compliance please?
Thanks
on add-ons or some beta-test tools such as Phoenix in combination with some other product.
Regards,
Mich
the option of org.apache.phoenix.mapreduce.CsvBulkLoadTool to load data at the command line into Hbase via the Phoenix skin.
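A sketch of that invocation, with the jar name taken from elsewhere in this thread and the paths purely illustrative:

hadoop jar /path/to/phoenix-4.8.1-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MARKETDATAHBASE \
  --input /data/prices/latest.csv \
  --zookeeper rhes564:2181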
HTH
[ScheduledChore: Name: CompactionThroughputTuner Period: 6 Unit: MILLISECONDS],
[ScheduledChore: Name: CompactedHFilesCleaner Period: 12 Unit: MILLISECONDS],
[ScheduledChore: Name: MovedRegionsCleaner for region rhes75,16020,1528317910703 Period: 12 Unit: MILLISECONDS],
[ScheduledChore: Name: MemstoreFlush
reverted back to the stable release Hbase 1.2.6, unless someone has resolved this issue.
Thanks
Yes, correct, I am using Hbase on HDFS with hadoop-2.7.3.
The file system is ext4.
I was hoping that I could avoid the sync option.
many thanks
-r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
using HBASE-1.2.6, which is the stable version, and it connects successfully to Hbase-2. This appears to be a working solution for now.
Regards
connection.
at
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:455)
Any thoughts?
Thanks
ionManager$HConnectionImplementation.(ConnectionManager.java:648)
... 33 more
Thanks
and Phoenix.
The issue I am facing is using both
org.apache.phoenix.mapreduce.CsvBulkLoadTool and hbase.mapreduce.ImportTsv
utilities.
So I presume the issue may be to do with both these command line tools not
working with Hadoop 3.1?
Thanks
Hi,
I was wondering if anyone has a handy script to reverse-engineer an existing table schema.
I guess one can get the info from the SYSTEM.CATALOG table to start with.
However, I was wondering if there is a shell script already, or whether I have to write my own.
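A starting point along those lines, a sketch only (the table name is illustrative, and DATA_TYPE comes back as a java.sql.Types code):

SELECT TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY, COLUMN_NAME, DATA_TYPE, ORDINAL_POSITION
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'marketDataHbase'
  AND COLUMN_NAME IS NOT NULL
ORDER BY ORDINAL_POSITION;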
Thanks,
Some documentation in the link below
https://phoenix.apache.org/bulk_dataload.html
talks about using a backslash with the table name etc., but that does not work.
Any ideas how to bulk load into a table created in mixed case?
Regards,
Mich
case table yourself. Any shell should do, I believe.
Thanks,
Thanks.
Done,
https://issues.apache.org/jira/browse/PHOENIX-5900
Thanks