y:ticker, timestamp=1475447365118, value=TSCO
Tesco PLC
column=stock_daily:tradedate, timestamp=1475447365118, value= 3-Jan-06
Tesco PLC
column=stock_daily:volume, timestamp=1475447365118, value=46935045
1 row(s) in 0.0390 seconds
Is this because the hbase_row_key --> Tesco PLC is the same for
76, value=24877341
TSCO-1-Apr-09
column=stock_info:stock, timestamp=1475507091676, value=TESCO PLC
TSCO-1-Apr-09
column=stock_info:ticker, timestamp=1475507091676, value=TSCO
What do you think?
Thanks
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
ey and whether it is necessary
to store those above columns?
regards
Dr Mich Talebzadeh
-+
2 rows selected (0.011 seconds)
However, I don't seem to be able to use the WHERE clause!
0: jdbc:phoenix:rhes564:2181> select "Date","volume" from "tsco" where
"Date" = "1-Apr-08";
Error: ERROR 504
ected (0.016 seconds)
BTW, I believe double quotes enclosing Phoenix column names are needed
for case sensitivity on HBase?
Also, does Phoenix have type conversion from VARCHAR to integer etc.? Is
there such a document?
Regards
Dr Mich Talebzadeh
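Regarding the ERROR 504 and the quoting question above: in Phoenix, double quotes mark case-sensitive identifiers, while string literals take single quotes, so "1-Apr-08" in double quotes is parsed as an (undefined) column name. A hedged sketch against the "tsco" table, with TO_NUMBER shown for the VARCHAR-to-number question (column names assumed from the thread):

```sql
-- Double quotes = case-sensitive identifiers; single quotes = string literals.
-- TO_NUMBER converts a VARCHAR column to a numeric value.
SELECT "Date", TO_NUMBER("volume") AS volume
FROM "tsco"
WHERE "Date" = '1-Apr-08';
```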
I believe HBase is now on its 11th anniversary (the 10th anniversary was
May 2020) and hope HBase will go from strength to strength and we will
keep using it for years to come with these frequent upgrades.
55e2-63f1-4def-b625-e73f0ac36271
column=price_info:timecreated, timestamp=1476133232864,
value=2016-10-10T17:12:22
1 row(s) in 0.0100 seconds
So how can I get the other columns?
Thanks
Dr Mich Talebzadeh
Hbase table.
Also, I like tools like Zeppelin that work with both SQL and Spark
functional programming.
It sounds like reading data from an HBase table is best done through some form of
SQL.
What are your views on this approach?
Dr Mich Talebzadeh
Thanks. I am on Spark 2, so it may not be feasible.
As a matter of interest, how about using Hive on top of an HBase table?
Dr Mich Talebzadeh
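On the Hive-on-HBase question above: an existing HBase table is usually exposed to Hive as an external table via the HBase storage handler. A hedged sketch, reusing the stock_daily column family from earlier in the thread; the Hive table and column names are invented:

```sql
-- Map a Hive external table onto an existing HBase table.
CREATE EXTERNAL TABLE tsco_hive (
  rowkey    STRING,
  ticker    STRING,
  tradedate STRING,
  volume    BIGINT
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,stock_daily:ticker,stock_daily:tradedate,stock_daily:volume"
)
TBLPROPERTIES ("hbase.table.name" = "tsco");
```

The table can then be queried with ordinary SQL from Hive, or from Spark and Zeppelin via the Hive metastore.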
I have already done it with Hive and Phoenix thanks
Dr Mich Talebzadeh
|2016-10-16T18:44:57| S18|74.10128|
|2016-10-16T18:44:57| S07|66.13622|
|2016-10-16T18:44:57| S20| 60.35727|
+---+--++
only showing top 10 rows
Is this a workable solution?
Thanks
Dr Mich Talebzadeh
Thanks Ted.
I have seen that before, but it sounds like cracking a nut with a sledgehammer.
It should be simpler than that.
Regards
Dr Mich Talebzadeh
lease of Hive is going to have in-memory database
(LLAP) so we can cache Hive tables in memory. That will be faster. Many
people underestimate Hive but I still believe it has a lot to offer besides
serious ANSI compliant SQL.
Regards
Mich
Dr Mich Talebzadeh
directory is mapped to Hive external table
4. There is a Hive managed table with added optimisation/indexing (ORC)
There are a number of ways of doing it as usual.
Thanks
Dr Mich Talebzadeh
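The flow described above (an HDFS directory mapped to a Hive external table, then a managed ORC table) can be sketched as follows; all table, column and path names here are invented for illustration:

```sql
-- 1. External table over the landing directory
CREATE EXTERNAL TABLE staging_marketdata (
  key    STRING,
  ticker STRING,
  price  DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/incoming/marketdata';

-- 2. Managed ORC table with added optimisation
CREATE TABLE marketdata_orc (
  key    STRING,
  ticker STRING,
  price  DOUBLE
)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY", "orc.create.index"="true");

-- 3. Load from staging into the optimised table
INSERT OVERWRITE TABLE marketdata_orc
SELECT key, ticker, price FROM staging_marketdata;
```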
test environment.
Dr Mich Talebzadeh
http://talebzadehmich.wordpress.com
*Disclaimer:* Use it at your own risk.
rImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 42 more
Any ideas what can cause this?
Thanks
Sorted this one out.
You need to put phoenix-4.8.0-HBase-0.98-client.jar in the $HBASE_HOME/lib
directory, though it doesn't say anything about Phoenix.
HTH
Dr Mich Talebzadeh
ry.java:105)
at
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
at
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
... 22 more
Thanks
Thanks Ted
hbase-1.2.3 worked!
Dr Mich Talebzadeh
(although it has a Cost Based
Optimizer), how HBase fares, beyond relying on these engines
Thanks
Dr Mich Talebzadeh
Sorry, that should read Hive, not Spark, here:
say compared to Hive, which is basically a SQL layer relying on different
engines (MR, Tez, Spark) to execute the code.
Dr Mich Talebzadeh
Hi,
Can someone explain in a nutshell HBase's use of the log-structured
merge-tree (LSM-tree) as its data storage architecture?
The idea of periodically merging smaller files into larger files to reduce
disk seeks: is this a similar concept to compaction in HDFS or Hive?
Thanks
Dr Mich Talebzadeh
-tree
appreciate any comments
cheers
Dr Mich Talebzadeh
thanks
bq. all updates are done in memory o disk access
I meant data updates are performed in memory, with no disk access.
In other words, much like an RDBMS reads data into memory and updates it there
(assuming the data is not already in memory?)
HTH
Dr Mich Talebzadeh
",
"orc.stripe.size"="16777216",
"orc.row.index.stride"="1" )
;
--show create table marketData;
--Populate target table
INSERT OVERWRITE TABLE marketData PARTITION (DateStamp = "${TODAY}")
SELECT
KEY
Agreed, much like any RDBMS.
Dr Mich Talebzadeh
BTW, I always understood that HBase is append-only. Is that generally true?
thx
Dr Mich Talebzadeh
"orc.compress"="SNAPPY",
"transactional"="true",
"orc.create.index"="true",
"orc.bloom.filter.columns"="object_id",
"orc.bloom.filter.fpp"="0.05",
"orc.stripe.size"="268435456",
"orc.r
/OVERWRITE
into this table using Spark as the execution engine for Hive (as opposed to
map-reduce), it should be pretty fast.
Hive is going to get an in-memory database in the next release or so, so it is
a perfect choice.
HTH
Dr Mich Talebzadeh
Hi,
I have a Hbase table that is populated via
org.apache.hadoop.hbase.mapreduce.ImportTsv
through bulk load every 15 minutes. This works fine.
In Phoenix I created a view on this table
jdbc:phoenix:rhes564:2181> create index marketDataHbase_idx on
"marketDataHbase" ("price_info"."ticker", "price
manipulation. LSM tree
structure is pretty impressive compared to the traditional B-tree access in
RDBMS.
Dr Mich Talebzadeh
Looks like it lost the connection to the Spark cluster.
What mode are you using with Spark: Standalone, YARN or another? The issue
looks like a resource manager issue.
I have seen this when running Zeppelin with Spark on Hbase.
HTH
Dr Mich Talebzadeh
licas.java:210)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
Dr Mich Talebzadeh
Sorry, do you mean that in my error case the issue was locating regions during
the scan?
In that case I do not know why it works through spark-shell but not
Zeppelin.
Thanks
Dr Mich Talebzadeh
HBase does not have secondary indexes, but Phoenix will allow you to create
them on HBase. The index structure will be created in HBase itself and
you can maintain it from Phoenix.
HTH
Dr Mich Talebzadeh
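A minimal sketch of such a secondary index, using the table and column names that appear elsewhere in the thread ("marketDataHbase", "price_info"."ticker"); the INCLUDE list is an assumption:

```sql
-- Phoenix maintains this index in an HBase table of its own;
-- INCLUDE'd columns make the index covering for those columns.
CREATE INDEX marketdatahbase_idx
ON "marketDataHbase" ("price_info"."ticker")
INCLUDE ("price_info"."price");
```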
Thanks Ted. I am aware of that issue of Spark 2.0.1 not handling
connections to Phoenix. For now I use Spark 2.0.1 on HBase directly, or
Spark 2.0.1 on HBase through Hive external tables.
Dr Mich Talebzadeh
Gentle reminder :)
Dr Mich Talebzadeh
OK, what it says is that this was discussed before and there is a JIRA on the
HBase side.
It is not a showstopper anyway.
Dr Mich Talebzadeh
'hbase:meta' at
region=hbase:meta,,1.1588230740, hostname=rhes564,16201,1477246132044,
seqNum=0
Is this related to Hbase region server?
Thanks
Dr Mich Talebzadeh
Hi Felix,
Yes, it is the same host that I run spark-shell on and start Zeppelin on.
Have you observed this before?
Thanks
Dr Mich Talebzadeh
s to avoid accidentally
dropping Hbase table etc. Is this a reasonable approach?
Then that Hive table can be used by a variety of tools like Spark, Tableau,
Zeppelin.
Is this a viable solution, as Hive seems to be preferred on top of HBase
compared to Phoenix etc.?
Thanks
Dr Mich Talebzadeh
Thanks John.
How about using Phoenix, or using Spark RDDs on top of HBase?
Do many people think Phoenix is not a good choice?
Dr Mich Talebzadeh
Thanks Gunnar.
Have you tried the performance of this product on HBase? There are a number
of options available. However, what makes this product better than Hive on
HBase?
regards
Dr Mich Talebzadeh
Thanks John for info.
Cheers
Dr Mich Talebzadeh
transaction will be rolled back in its entirety.
Now, how can HBase handle this? Specifically, at the theoretical level,
if standard transactional processing were migrated from an RDBMS to HBase
tables, would that work?
Has anyone built successful transaction processing in Hbase?
Thanks
Dr Mich
family columns, will it work ok?
thanks
Dr Mich Talebzadeh
Hi,
Storing XML files in Big Data: are there any strategies to create multiple
column families, or just one column family, and in that case how many columns
would be optimal?
thanks
Dr Mich Talebzadeh
Thanks Richard.
How would one decide on the number of column families and columns?
Is there a ballpark approach?
Cheers
Dr Mich Talebzadeh
Thanks Ted.
How does Phoenix provide transaction support?
I have read some docs but it sounds problematic. I need to be sure there
is full commit and rollback if things go wrong!
Also, it appears that Phoenix transactional support is in a beta phase.
Cheers
Dr Mich Talebzadeh
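For reference, a hedged sketch of the (beta) Phoenix transaction support discussed above; the table and column names are invented, and it assumes the transaction manager has been enabled via phoenix.transactions.enabled=true in hbase-site.xml:

```sql
-- Mark a new table transactional at creation time ...
CREATE TABLE ledger (pk VARCHAR PRIMARY KEY, amount VARCHAR) TRANSACTIONAL=true;

-- ... or switch an existing table over.
ALTER TABLE ledger SET TRANSACTIONAL=true;
```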
. However, some use case fit would
be very valuable.
Thanks
Dr Mich Talebzadeh
on add-ons or some beta test tools such as Phoenix
in combination with some other product.
Regards,
Mich
Dr Mich Talebzadeh
+ me
Dr Mich Talebzadeh
Hi Asher,
As mentioned before, Spark 2 does not work with Phoenix. However, you can
use Spark 2 on top of HBase directly.
Does that answer your point?
Thanks
Dr Mich Talebzadeh
table but only EXTERNAL tables are supported. I was wondering about the pros and
cons of using Hive or Phoenix tables on HBase?
Thanks
Dr Mich Talebzadeh
lue()).toString,
Bytes.toString( iter.next().getValue()).toString,
Bytes.toString(iter.next().getValue())
)}
The above reads the column family columns sequentially. How can I force it
to read specific columns only?
Thanks
Dr Mich Talebzadeh
CREATE TABLE MARKETDATAHBASE (PK VARCHAR PRIMARY KEY, PRICE_INFO.TICKER
VARCHAR, PRICE_INFO.TIMECREATED VARCHAR, PRICE_INFO.PRICE VARCHAR);
HTH,
Dr Mich Talebzadeh
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
I cannot seem to fix this even after removing the hbase directory
from HDFS and ZooKeeper! Any ideas will be appreciated.
Thanks
Dr Mich Talebzadeh
CompactionThroughputTuner Period: 6 Unit:
MILLISECONDS], [ScheduledChore: Nam
e: CompactedHFilesCleaner Period: 12 Unit: MILLISECONDS],
[ScheduledChore: Name: MovedRegionsCleaner for region
rhes75,16020,1528317910703 Period: 12 Unit: MILLISECONDS],
[ScheduledChore: Name: MemstoreFlush
reverted back to the stable release HBase 1.2.6, unless someone has resolved
this issue.
Thanks
Dr Mich Talebzadeh
Yes, correct. I am using HBase on HDFS with hadoop-2.7.3.
The file system is ext4.
I was hoping that I could avoid the sync option.
many thanks
Dr Mich Talebzadeh
ION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE =>
'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
If I create the table in default namespace (i.e. without any namespace
name) it works!
Thanks
Dr Mich Ta
-r
baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using
/home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
Dr Mich Talebzadeh
Li
75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
-exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
StreamCapabilities
Dr Mich Talebzadeh
using HBASE-1.2.6 which is the stable version and it connects
successfully to Hbase-2. This appears to be a working solution for now.
Regards
Dr Mich Talebzadeh
2ab6 05a7
0x040: b605 a82a b605 a9b6 05aa 2ab6 05ab b605
0x050: ac2a b605 adb6 05ae 572a b605 af9a 000a
0x060: 2ab6 05b0 9900 0c2b 2ab8 0410 b605 b157
0x070: 2bb6 05b2 b0
Stackmap Table:
same_frame(@6)
append_frame(@103,Object[#294
Thanks Ted.
I downloaded the latest HBase binary, which is 2.0.1 (2018/06/19).
Is there any trunk version built for Hadoop 3.1, please, and if so where can
I download it?
Regards,
Dr Mich Talebzadeh
So what options do I have here? Is there any conf parameter I can set in
hbase-site.xml to make this work? Or shall I go back to a more stable
version of HBase?
cheers
Dr Mich Talebzadeh
Thanks
In your point below
…. or you can change default WAL to FSHLog.
is there any configuration parameter to allow me to do so in hbase-site.xml?
Dr Mich Talebzadeh
One way would be to set the WAL outside of the Hadoop environment. Will that work?
The following did not work:
hbase.wal.provider
multiwal
Dr Mich Talebzadeh
Thanks Ted.
Went back to hbase-1.2.6 that works OK with Hadoop 3.1
Dr Mich Talebzadeh
Hi,
What is the ETA for a version of HBase that will work with Hadoop 3.1 and
may not require an HA setup for HDFS?
Thanks
Dr Mich Talebzadeh
always been very knowledgeable and helpful in
the forum and being an engineer myself, I would not think Ted's suggestion
was far off.
Kind Regards,
Dr Mich Talebzadeh
works with Hadoop 3.1
HTH
Dr Mich Talebzadeh
s/5091/running-a-mapreduce-job-fails-file-does-not-exist?rq=1
Thanks
Dr Mich Talebzadeh
connection.
at
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:455)
Any thoughts?
Thanks
Dr Mich Talebzadeh
Hi,
I have an Hbase table 'trading:MARKETDATAHBASEBATCH'
Kafka delivers topic rows into flume.
This is a typical json row
f2d7174e-6299-49a7-9e87-0d66c248e66b
{"rowkey":"f2d7174e-6299-49a7-9e87-0d66c248e66b","ticker":"BP",
"timeissued":"2020-02-14T08:54:13", "price":573.25}
The rowkey is UUID
On Fri, 14 Feb 2020 at 12:27, Pedro Boado wrote:
> If what you're looking after is not achievable by extracting fields through
> regex (it looks like it should) and you are after full control over what's
> written to HBa
act your own
> .
>
> https://github.com/slmnhq/flume/blob/master/flume-ng-sinks/flume-ng-hbase-sink/src/main/java/org/apache/flume/sink/hbase/SimpleHbaseEventSerializer.java#L40
>
> I'd say you should go for RegexHbaseEventRowKeySerializer.
>
>
>
> On Fri, 14 Feb 2020
Hi,
I have streaming Kafka that sends data to flume in the following JSON format
This is the record sent via Kafka
7d645a0f-0386-4405-8af1-7fca908fe928
{"rowkey":"7d645a0f-0386-4405-8af1-7fca908fe928","ticker":"IBM",
"timeissued":"2020-02-14T20:32:29", "price":140.11}
Note that "7d645a0f-038
displays the key alright: value=f8a6e006-35bb-4470-9a7b-9273b8aa83f1
But I cannot search on that key!
hbase(main):333:0> get 'trading:MARKETDATAHBASEBATCH',
'f8a6e006-35bb-4470-9a7b-9273b8aa83f1'
COLUMN CELL
0 row(s) in 0.05
PRICE_INFO:timeissued
timestamp=1581883743642, value= "timeissued":"2020-02-16T20:19:43"
PRICE_INFO:timestamp
timestamp=1581883743642, value=1581883739646
PRICE_INFO:topic
timestamp=1581883743642, value=md
7 row(s) in 0.0040 seconds
Hope this helps
Dr Mich
Hi,
Thanks.
Does this version of Hbase work with Hadoop 3.1? I am still stuck with
Hbase 1.2.7
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
Regards,
Dr Mich
ad Hadoop 2.8.5 client for Hbase use?
Regards,
Dr Mich Talebzadeh
l running
For usage try 'help "drop"'
Took 668.2910 seconds
So the table cannot be dropped.
All fun and games
Cheers,
Dr Mich Talebzadeh
Thanks Nick.
My bad. One of the nodes hosting a region server did not have the Snappy package
installed.
yum install snappy snappy-devel did the trick
Regards,
Dr Mich Talebzadeh
On Mon, 17 Feb 2020 at 22:27, Mich Talebzadeh
wrote:
> I stripped everything from the jar list. This is all I have
>
> spark-shell --jars shc-core-1.
keep using it for years to come!
Dr Mich Talebzadeh
Hi,
Back in 2017, I wrote an article in LinkedIn on HBase titled HBase for
impatient
<https://www.linkedin.com/pulse/apache-hbase-impatient-mich-talebzadeh-ph-d-/>
Today I wrote a post on LinkedIn in celebration of HBase's 10-year
anniversary.
HTH,
Dr Mich Talebzadeh
Many thanks. I thought HBase deserved a 10-candle virtual cake!
regards,
Dr Mich Talebzadeh
Hi,
I will be presenting on HBase to one of the major European banks this
Friday, 15th May.
Does anyone have the latest bullet points on new features of HBase, so I can add
them to my presentation material?
Many thanks,
Dr Mich Talebzadeh
Hi,
Thank you for the proposals.
I am afraid I have to agree to differ. The terms master and slave, commonly
used in Big Data tools (not confined to HBase only), are BAU and historical,
and bear no resemblance to anything recent.
Additionally, both whitelist and blacklist simply refer to a proposa
On Mon, 22 Jun 2020 at 20:14, Mich Talebzadeh
wrote:
>
> Hi,
>
> Thank you for the proposals.
>
> I am afraid I have to agree to differ. The term master and slave (commonly
> used in Big data tools (not confined to HBase only) is BAU and historical)
> and bears no res
Let us look at what *slave* means
According to the merriam-webster
https://www.merriam-webster.com/dictionary/slave
Definition of *slave*
(Entry 1 of 4)
1: a person held in servitude as the chattel of another
2: one that is completely subservient to a dominating influence
3: a device (such as t