1 | US | 20150623 | 3000 | 3421
1 | UK | 20150623 | NULL | 1212
Thursday, June 25th @ 10am PST
On Tuesday, June 23, 2015, Ns G nsgns...@gmail.com wrote:
Hi James,
Can you specify the time zone please?
Thanks
Satya
On 23-Jun-2015 9:32 am, James Taylor jamestay...@apache.org wrote:
If you're
This question is mostly a follow-up to my earlier mail (below).
I’m re-consuming this data, one (5 GB) CSV file at a time.
I see that in consuming this file, there was one failed reduce task. In the
output, I see a stack trace that I’m guessing is related.
So, 2 questions:
1 – does this
ADID | COUNTRY | DAY ID   | CF.IMP01 | CF.IMP02
1    | US      | 20150623 | 3000     | 3421
1    | UK      | 20150623 | NULL     | 1212
ADID | COUNTRY | DAY ID   | CF.IMP01 | CF.IMP02
1    | US      | 20150623 | 3000     | 3421
1    | UK      | 20150623 | NULL     | 1212
1 | US | 20150623 | 3000 | 3421
1 | UK | 20150623 | NULL | 1212
Here, I have taken the hour part from the hour ID.
ADID | COUNTRY | DAY ID   | CF.IMP01 | CF.IMP02
1    | US      | 20150623 | 3000     | 3421
1    | UK      | 20150623 | NULL     | 1212
I did this installation.
You should install the Phoenix parcel from Cloudera Manager like any other
parcel. It's for Phoenix 4.3.1; the 1.0 is probably the version of
Cloudera's parcel.
On Tue, Jun 23, 2015 at 2:48 PM Serega Sheypak serega.shey...@gmail.com
Thanks for all the additional details.
The short answer (to both of your questions from your most-recent mail) is
that there shouldn't be any data loss, and that the failed reducer will
automatically be re-run by MapReduce. The full job is only successful (as
it was in this case) when all mappers
1 | US | 20150623 | 3000 | 3421
1 | UK | 20150623 | NULL | 1212
Since lastGCTime is a dynamic column, you need to specify it explicitly
along with the table name:
SELECT lastGCTime FROM EventLog(lastGCTime TIME)
SELECT * will return only the regular columns (and not dynamic columns).
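For completeness, a minimal JDBC sketch of that same query (the connection URL is made up for illustration; EventLog and lastGCTime are from the thread above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DynamicColumnQuery {
    public static void main(String[] args) throws Exception {
        // The (lastGCTime TIME) clause after the table name declares the
        // dynamic column for this statement only; SELECT * would skip it.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT lastGCTime FROM EventLog(lastGCTime TIME)")) {
            while (rs.next()) {
                System.out.println(rs.getTime("lastGCTime"));
            }
        }
    }
}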
On Tue, Jun 23, 2015 at 12:09 AM, guxiaobo1982
The default column family name is 0. This is the string containing the
character representation of zero (or in other words, a single byte with
value 48).
And yes, it's possible to read Phoenix tables using the HBase API (although
it's of course a lot easier if you go via Phoenix).
- Gabriel
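As an illustration of reading via the HBase API, a bare client-side scan of a Phoenix table might look like the sketch below (this assumes the HBase 1.x client API; the table name is made up, and cell values come back in Phoenix's own binary encoding, so decoding them properly still needs Phoenix's type codecs):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RawPhoenixScan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("EVENTLOG"))) {
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("0")); // Phoenix's default column family
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toStringBinary(r.getRow()));
                }
            }
        }
    }
}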
On
Getting the below error when using a multitenant Phoenix connection.
Is there a way to programmatically specify zoo.cfg and hbase-site.xml
properties (like server.0=myhostname:2888:3888) when initializing a Phoenix
connection?
Adding hbase-site.xml and zoo.cfg to the classpath doesn't help in this case.
It's running in standalone mode with HBase managing ZK. I can connect to
HBase, and I can also connect with the Phoenix JDBC client (single-tenant
connection).
When I try using a multitenant connection, I'm able to connect with the first
tenant and write to HBase via the Phoenix JDBC connection,
the second
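For what it's worth, a minimal sketch of doing both things programmatically, per Phoenix's multi-tenancy docs (host names are made up; note that zoo.cfg entries like server.0=... are ZooKeeper server-side ensemble settings, so a client can't apply those):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Tenant-specific connection; each tenant's connection sets this.
        props.setProperty("TenantId", "tenant2");
        // Client-side hbase-site.xml overrides can be passed here instead
        // of relying on the classpath.
        props.setProperty("hbase.zookeeper.quorum", "myhostname");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:myhostname:2181", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}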
70 minutes sounds too high. You must have very few MapReduce slots or very
few regions in your table. It should not take 70 minutes for that job; I run
it on a 5TB table in around 2-5 minutes (the table has around 1200 regions).
~Anil
On Tue, Jun 23, 2015 at 1:54 PM,
Hi, I'm testing dummy code:
int result = getJdbcFacade().createConnection().prepareStatement(
        "upsert into unique_site_visitor (visitorId, siteId, visitTs)"
        + " values ('xxxyyyzzz', 1, 2)").executeUpdate();
LOG.debug("executeUpdate result: {}", result); // executeUpdate result: 1
Hi Zack,
Would it be possible to provide a few more details on what kinds of
failures that you're getting, both with the CsvBulkLoadTool, and with the
SELECT COUNT(*) query?
About question #1, there aren't any known bugs (that I'm aware of) that
would cause some records to go missing in the
Hi Maryann,
EVENTS table has id, article, and more columns. Id is the primary key.
MAPPING table has id, article, category columns. Id is the primary key.
There is an index on the article column of both tables.
Below is the query.
select count(MAPPING.article) as cnt,MAPPING.category from EVENTS
join
Hi Alex,
I don't have the authority to speak on behalf of the Phoenix committers.
However, in the worst case, if your contribution is not accepted by Phoenix,
you can still patch Phoenix locally and use it the way you want. Many people
use open source software like that.
HTH,
Anil
On Fri, Jun 19,
Hi James,
Can you specify the time zone please?
Thanks
Satya
On 23-Jun-2015 9:32 am, James Taylor jamestay...@apache.org wrote:
If you're interested in learning more about Phoenix, tune in this
Thursday @ 10am where I'll be talking about Phoenix in a free Webcast
hosted by O'Reilly:
Your second query should work with Phoenix 4.3 or later.
Thanks, unfortunately at the moment I’m stuck with Phoenix 4.2.
I will investigate the problem with the first one and get back to you.
Appreciate this.
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
Michael,
You're correct, count distinct doesn't support multiple arguments currently
(I filed PHOENIX-2062 for this). Another workaround is to combine a.col1
and b.col2 into an expression, for example concatenating them. If order
matters, you could do this:
select count(distinct col1 || col2) ...
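One caveat with the concatenation workaround: 'ab' || 'c' and 'a' || 'bc' would collapse into the same value. If that collision is possible in your data, put a separator that can't appear in either column between them, e.g. select count(distinct col1 || '|' || col2) ...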
You can specify the zookeeper quorum in the connection string as
described here: https://phoenix.apache.org/#SQL_Support. All of the
hosts are expected to use the same port (which may be specified as
well).
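For example, the URL for a three-host quorum would look something like the sketch below (host names are made up; the trailing element is the HBase znode parent, which defaults to /hbase):

import java.sql.Connection;
import java.sql.DriverManager;

public class QuorumUrl {
    public static void main(String[] args) throws Exception {
        // Comma-separated quorum, one shared port, then the znode parent.
        String url = "jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}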
On Tue, Jun 23, 2015 at 1:05 PM, Alex Kamil alex.ka...@gmail.com wrote:
it's running in
To add to what Gabriel said, you can also specify your own default
column family name with the DEFAULT_COLUMN_FAMILY property when you
create your table:
https://phoenix.apache.org/language/index.html#create_table
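A minimal example of that property in the DDL (table, column, and family names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateWithFamily {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // HBase-level readers will now see family "CF" instead of "0".
            stmt.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS EVENTLOG ("
                    + " id BIGINT NOT NULL PRIMARY KEY,"
                    + " host VARCHAR"
                    + ") DEFAULT_COLUMN_FAMILY='CF'");
        }
    }
}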
On Tue, Jun 23, 2015 at 11:07 AM, Gabriel Reid gabriel.r...@gmail.com wrote:
The
Oops, forgot the commit:
Connection conn = getJdbcFacade().createConnection();
int result = conn.prepareStatement(
        "upsert into unique_site_visitor (visitorId, siteId, visitTs)"
        + " values ('xxxyyyzzz', 1, 2)").executeUpdate();
conn.commit();
LOG.debug("executeUpdate result: {}", result);
I think Cloudera has a Phoenix parcel available to download and use. You can
Google “cloudera phoenix support” and you should find instructions on how to
connect to a Cloudera Phoenix repo and install the parcel.
Yanlin
On Jun 23, 2015, at 3:36 PM, Kevin Verhoeven
Which version of Phoenix are you using?
On Tuesday, June 23, 2015, Michael McAllister mmcallis...@homeaway.com
wrote:
Hi
(This question relates to Phoenix 4.2 on HDP 2.2)
I have a situation where I want to count the distinct combination of a
couple of columns.
When I try the
Sorry, I missed the first line. Your second query should work with Phoenix
4.3 or later.
I will investigate the problem with the first one and get back to you.
Thanks,
Maryann
On Tuesday, June 23, 2015, Michael McAllister mmcallis...@homeaway.com
wrote:
Hi
(This question relates to
Thanks Gabriel,
Can someone please give me detailed instructions for increasing the timeout?
I tried running Update Statistics and it failed with the exception below.
I am running the query from a region server node by CD’ing into
/user/hdp/2.2.0.0-2041/phoenix/bin and calling ./sqlline.py
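Not definitive instructions, but the usual client-side knobs are the ones below; the same settings can also go in the hbase-site.xml your sqlline.py client picks up (the table name and the timeout values here are only illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class LongStatsRun {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "1800000"); // 30 minutes
        props.setProperty("hbase.rpc.timeout", "1800000");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE STATISTICS EVENTLOG");
        }
    }
}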
Make sure you run the commit on the same connection from which you do
the upsert. Looks like you're opening a new connection with each
statement. Instead, open it once in the beginning and include the
commit like Samarth mentioned:
Connection conn = getJdbcFacade().createConnection();
int result = conn.prepareStatement(
        "upsert into unique_site_visitor (visitorId, siteId, visitTs)"
        + " values ('xxxyyyzzz', 1, 2)").executeUpdate();
conn.commit();