s).
> - do a flush and major_compaction on SYSTEM.CATALOG
> - when you don't see those columns, open a connection at CurrentSCN=9 and
> ALTER TABLE to add both the columns.
> - you may set KEEP_DELETED_CELLS back to true on SYSTEM.CATALOG
>
> Regards,
> Ankit Singhal
>
> cause problems, as we base the need to upgrade on the
> timestamp of the system catalog table.
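The timestamp-based upgrade gate described above can be sketched roughly as follows. This is a hedged illustration, not Phoenix's actual code; the constant name and value are illustrative only:

```python
# Illustrative sketch of the upgrade gate: the client compares the
# timestamp stored on SYSTEM.CATALOG against the timestamp expected for
# its own version, and only runs upgrade steps (such as adding the
# BASE_COLUMN_COUNT column) when the stored value is older. The constant
# below is a made-up value, not Phoenix's real MIN_SYSTEM_TABLE_TIMESTAMP.
MIN_SYSTEM_TABLE_TIMESTAMP = 9

def needs_upgrade(stored_catalog_timestamp):
    """Return True when the upgrade steps should run."""
    return stored_catalog_timestamp < MIN_SYSTEM_TABLE_TIMESTAMP
```

This is why manually bumping the SYSTEM.CATALOG timestamp is dangerous: a newer-looking timestamp makes the client skip upgrade steps it still needs.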
> Thanks,
> James
>
>
> On Tuesday, April 26, 2016, Arun Kumaran Sabtharishi <arun1...@gmail.com>
> wrote:
>
>> Hi Ankit,
>>
>> Just following with the question t
BASE_COLUMN_COUNT too so that the dependent features
> work correctly. (Remember to use the correct INTEGER byte representation for
> the DATA_TYPE column.)
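For the INTEGER byte representation mentioned above: to my understanding, Phoenix serializes INTEGER as 4 big-endian bytes with the sign bit flipped, so that unsigned byte-wise comparison matches signed numeric order. A sketch of that encoding (verify against `org.apache.phoenix.schema.types.PInteger` in the Phoenix source before relying on it):

```python
import struct

def encode_phoenix_int(v):
    # Sketch of Phoenix's INTEGER serialization as I understand it:
    # 4 bytes, big-endian, with the sign bit flipped so that raw byte
    # ordering agrees with signed numeric ordering.
    return struct.pack(">I", (v ^ 0x80000000) & 0xFFFFFFFF)

# Byte order matches numeric order across the sign boundary:
assert encode_phoenix_int(-1) < encode_phoenix_int(0) < encode_phoenix_int(4)
```

Writing a plain two's-complement int into the DATA_TYPE cell would sort (and read back) incorrectly, which is why the exact representation matters here.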
>
> And, can you also please share output of
> > scan 'SYSTEM.SEQUENCE'
>
> Regards,
> Ankit
>
> On Fri, Apr 22, 2016 at 9:14
> > ALTER TABLE SYSTEM.CATALOG ADD BASE_COLUMN_COUNT INTEGER,
> > IS_ROW_TIMESTAMP BOOLEAN;
> >!quit
>
> Quit the shell and start new session without CurrentSCN.
> > ./sqlline.py localhost
> > !describe system.catalog
>
> this should resolve the issue of the missing columns.
>
> Regards,
> Ankit Singhal
>
>
> On Fri
> It's ok if you just post, after a grep for CATALOG, the output of the command
> (scan 'SYSTEM.CATALOG', {RAW=>true}).
>
> On Wed, Apr 20, 2016 at 10:07 PM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
>
>> One more question to add,
>> Do we need to have 1000
\x00default
Thanks,
Arun
On Wed, Apr 20, 2016 at 11:31 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> James,
>
> Table SYSTEM.CATALOG is ENABLED
> SYSTEM.CATALOG, {TABLE_ATTRIBUTES => {coprocessor$1 =>
> '|org.apache.phoenix.coprocessor.ScanRegion
On Wed, Apr 20, 2016 at 11:19 AM, James Taylor <jamestay...@apache.org>
wrote:
> Arun,
> Please run the command Ankit mentioned in an HBase shell and post the
> output back here.
> Thanks,
> James
>
>
> On Wednesday, April 20, 2016, Arun Kumaran Sabtharishi <arun1...@gmail.com> wrote:
4.6 timestamp, which is stopping the upgrade
> code from adding a new column.
>
> scan 'SYSTEM.CATALOG', {RAW=>true}
>
>
>
> Regards,
> Ankit Singhal
>
> On Wed, Apr 20, 2016 at 4:25 AM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
>
>> A
before 4.6 upgrade?"
We do see that clearCache() is being called for 4.7 upgrades from the
ConnectionQueryServicesImpl class, but not for 4.6.
Thanks,
Arun
On Tue, Apr 19, 2016 at 10:22 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> James,
>
> To add
James,
To add more information on this issue: this happens in new Phoenix views
associated with brand-new tables as well, so this cannot be an
upgrade/migration issue. We have not figured out a specific way to reproduce
this issue yet. Could you suggest what direction this problem could
be
To add details to the original problem that was mentioned in this email, we
migrated to Phoenix-4.6.1 very recently and this problem started occurring
only after that.
1. Checking SYSTEM.CATALOG for some older Phoenix views in the same
environment, some of the *Phoenix views did not have the
Thanks, James.
But I do not see Phoenix using HBase's BulkDeleteProtocol. Does this mean
Phoenix deletes rows one by one in linear time?
Thanks,
Arun
After trying to dig through and debug the Phoenix source code for several
hours, I could not find the place where the actual Phoenix delete
happens. Kindly point me to where the delete starts in the Phoenix core
and where the actual delete happens.
Note: I have checked classes like
Hi Phoenix users and Developers,
Does Phoenix use the BulkDeleteProtocol in HBase, or does it delete the rows
one at a time? Kindly direct me to the appropriate class in the Phoenix
source code as well.
Thanks,
Arun
Hello all,
While restarting the HBase servers after upgrading from Apache Phoenix
4.5.1-HBase-1.0 to 4.6.0-HBase-1.0, the master server fails to start due to
the following exception.
2015-12-16 14:11:32,330 FATAL org.apache.hadoop.hbase.master.HMaster:
Failed to become active master
James,
Do you see any issues in using the delete statement below as a workaround
for dropping views until the JIRA's are fixed and released?
delete from SYSTEM.CATALOG where table_name = 'MY_VIEW'
Thanks,
Arun
James,
We dug deeper and found that the time is spent in the
MetaDataEndPointImpl.findChildViews() method. It runs a scan on the
SYSTEM.CATALOG table looking for the link record. Since the link record is
in the format CHILD-PARENT, it has to scan the entire table to find the
parent suffix.
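The cost described above can be illustrated with a toy model (the keys and names here are illustrative, not the real SYSTEM.CATALOG schema): because the link row key leads with the child, finding all children of a given parent cannot use a key-prefix lookup and must examine every link row.

```python
# Toy model of the CHILD-PARENT link rows described above. Row keys are
# sorted child-first, so a lookup by parent gets no help from the sort
# order: every link row must be examined.
link_rows = sorted([
    ("VIEW_A", "TABLE_1"),
    ("VIEW_B", "TABLE_1"),
    ("VIEW_C", "TABLE_2"),
])

def find_child_views(parent):
    # Full scan over every link row: O(total links), regardless of which
    # parent is requested. A PARENT-CHILD key order (or a secondary index)
    # would turn this into a cheap prefix scan instead.
    return [child for child, p in link_rows if p == parent]
```

Run against the toy data, `find_child_views("TABLE_1")` walks all three rows to return two children, which is exactly the full-table scan behavior observed in MetaDataEndPointImpl.findChildViews().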
In
James,
Filed the bugs.
https://issues.apache.org/jira/browse/PHOENIX-2050
https://issues.apache.org/jira/browse/PHOENIX-2051
Thanks,
Arun
Hello Phoenix/Hbase users and developers,
I have a few questions regarding how table delete works in HBase.
*What I know:*
If an HBase table is deleted (after disabling it), the SYSTEM.CATALOG entries
related to that table will be deleted.
If a view is created using Phoenix (assuming there are
Hello James,
Thanks for the reply. Here are the answers for the questions you have asked.
*1.) What's different between the two environments (i.e. the working and not
working ones)?*
The not-working ones have more views than the working ones.
*2.) Do you mean 1.3M views or 1.3M
Hello phoenix users and developers,
After upgrading to Phoenix 4.3.1, dropping a view times out (for views with
huge data and also for views with little or no data).
Before upgrading to 4.3.1 client, the system was using 4.0.0 incubating
client which had a similar issue where dropping a view took much time. But
at some
James,
Thanks for your reply. It worked!
But can you help me understand how it makes a difference even when
there is no data in the SYSTEM.SEQUENCE table?
Thanks,
Arun
Hello Phoenix users and developers,
Recently upgraded Phoenix to 4.3.1, and the following things are hazy.
1. What is the right way to use the phoenix jdbc client?
2. Since there is no phoenix-4.3.1-client.jar in the maven repository, is
it wise to use the jar as a library within the
It is from both code and sqlline.
On Jun 1, 2015 5:52 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Is this from code, or sqlline?
On Fri, May 29, 2015 at 2:42 PM, Arun Kumaran Sabtharishi
arun1...@gmail.com wrote:
Upgraded Apache-phoenix-4.1.0 to 4.3.1. When connecting phoenix client
When adding the Phoenix 4.3.1 dependency to the project and building, the
phoenix-4.3.1 client jars are not in the Maven Central repository. Which is
the right place to request that the jars be added? Or is this the right place?
Thanks,
Arun
Hello,
1. Currently using phoenix 4.0.0 incubating for both client and server.
2. Upgraded to 4.3.1(most recent)
3. While trying to connect using the client on the command line (using
./sqlline.py), the connection could not succeed, throwing the following
error.
1)
*Error: ERROR 1013 (42M04): Table