[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-19 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541-rev3.patch

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, 
 trunk-HBASE-10541-rev2.patch, trunk-HBASE-10541-rev3.patch, 
 trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-19 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13906405#comment-13906405
 ] 

Vasu Mariyala commented on HBASE-10541:
---

Thanks [~stack] for the review.

The patch is intended to work for file systems that honor the properties 
passed to them via the method

{code}
  public FSDataOutputStream create(Path file, FsPermission permission,
  boolean overwrite, int bufferSize,
  short replication, long blockSize, Progressable progress)
{code}

Some file system implementations, such as FTPFileSystem and S3FileSystem, 
ignore parameters that don't make sense to them; in this case, they ignore 
the replication. WebHdfsFileSystem honors all of these parameters.
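As a rough, self-contained sketch of the behavior described above (these classes are illustrative stand-ins, not the real Hadoop FileSystem implementations):

```java
// Illustrative stand-ins for the behavior described above: some file
// systems honor the replication hint passed to create(), while others
// silently ignore parameters that don't make sense to them.
interface SketchFileSystem {
    int effectiveReplication(short requestedReplication);
}

// Mirrors a file system like WebHdfsFileSystem: the requested value is used.
class HonoringFileSystem implements SketchFileSystem {
    public int effectiveReplication(short requestedReplication) {
        return requestedReplication;
    }
}

// Mirrors a file system like S3FileSystem: replication does not apply,
// so the hint is dropped and a fixed value is reported.
class IgnoringFileSystem implements SketchFileSystem {
    public int effectiveReplication(short requestedReplication) {
        return 1;
    }
}

public class FileSystemSketch {
    public static void main(String[] args) {
        System.out.println(new HonoringFileSystem().effectiveReplication((short) 2)); // 2
        System.out.println(new IgnoringFileSystem().effectiveReplication((short) 2)); // 1
    }
}
```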

From going through the code, the existing BLOCKSIZE of the HColumnDescriptor 
indicates the size of the blocks within an HFile (data blocks, meta blocks), 
while FS_BLOCKSIZE is the block size the file system uses when storing that 
HFile. Please correct me if I am wrong.
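To make the distinction concrete, a small sketch (the constants below are illustrative defaults, not authoritative HBase configuration values):

```java
// Sketch of the two distinct "block size" notions discussed above.
// The values are illustrative defaults, not authoritative HBase settings.
public class BlockSizeSketch {
    // BLOCKSIZE on the HColumnDescriptor: size of HFile data/meta blocks.
    static final long HFILE_BLOCK_SIZE = 64 * 1024L;       // 64 KB
    // FS_BLOCKSIZE: block size the file system uses to store the HFile.
    static final long FS_BLOCK_SIZE = 128 * 1024 * 1024L;  // 128 MB

    public static void main(String[] args) {
        // One file system block can hold many HFile blocks.
        System.out.println(FS_BLOCK_SIZE / HFILE_BLOCK_SIZE); // 2048
    }
}
```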

Added more documentation in rev3.patch.

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, 
 trunk-HBASE-10541-rev2.patch, trunk-HBASE-10541-rev3.patch, 
 trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10541:
-

 Summary: Make the file system properties customizable per 
table/column family
 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541.patch

File system properties such as replication (the number of nodes to which the 
HFile is replicated) and block size need to be customizable per table/column 
family. This is especially important in testing scenarios, or for test tables 
where we don't want the HFile replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541.patch

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Patch Available  (was: Open)

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13901860#comment-13901860
 ] 

Vasu Mariyala commented on HBASE-10541:
---

The FSUtils.create method doesn't honor the permissions passed to it if the 
file system is a DistributedFileSystem; it uses FsPermission.getDefault() 
instead. Not sure if this is intended, but this patch fixes that issue too.

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541-rev1.patch

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Patch Available  (was: Open)

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Open  (was: Patch Available)

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541-rev2.patch

Thanks [~yuzhih...@gmail.com] for the review. Uploaded the rev2 patch, which 
addresses your comments.

 Make the file system properties customizable per table/column family
 

 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541-rev1.patch, 
 trunk-HBASE-10541-rev2.patch, trunk-HBASE-10541.patch


 The file system properties like replication (the number of nodes to which the 
 hfile needs to be replicated), block size need to be customizable per 
 table/column family. This is important especially in the testing scenarios or 
 for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10546:
-

 Summary: Two scanner objects are open for each hbase map task but 
only one scanner object is closed
 Key: HBASE-10546
 URL: https://issues.apache.org/jira/browse/HBASE-10546
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala


The map reduce framework calls createRecordReader of 
TableInputFormat/MultiTableInputFormat to get the record reader instance. 
This method initializes the TableRecordReaderImpl (via its restart method), 
which opens a scanner object. After this, the map reduce framework calls 
initialize on the RecordReader, which in our case calls restart of the 
TableRecordReaderImpl again without closing the first scanner. At the end of 
the task, only the second scanner object is closed. Because of this, the 
smallest read point of the HRegion is affected.

We don't need to initialize the RecordReader in the createRecordReader method, 
and we need to close the existing scanner object in the restart method (in 
case the restart method is called because of exceptions in the nextKeyValue 
method).
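A minimal, self-contained model of the leak described above (the class and method names are illustrative, not the actual HBase code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the leak described above: restart() opens a new scanner
// without closing the one it replaces, so only the last scanner is closed.
public class ScannerLeakSketch {
    static class Scanner {
        boolean closed = false;
        void close() { closed = true; }
    }

    static final List<Scanner> opened = new ArrayList<>();
    static Scanner current;

    // Buggy variant, mirroring the described behavior of
    // TableRecordReaderImpl.restart(): the previous scanner is abandoned.
    static void restartWithoutClose() {
        current = new Scanner();
        opened.add(current);
    }

    // Proposed fix: close the previous scanner before opening a new one.
    static void restartWithClose() {
        if (current != null) current.close();
        current = new Scanner();
        opened.add(current);
    }

    public static void main(String[] args) {
        restartWithoutClose();  // createRecordReader initializes once...
        restartWithoutClose();  // ...then initialize() triggers restart again
        current.close();        // end of task closes only the second scanner
        System.out.println(opened.get(0).closed); // false: first scanner leaked
    }
}
```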



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10546:
--

Attachment: 0.94-HBASE-10546.patch

 Two scanner objects are open for each hbase map task but only one scanner 
 object is closed
 --

 Key: HBASE-10546
 URL: https://issues.apache.org/jira/browse/HBASE-10546
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
 Attachments: 0.94-HBASE-10546.patch


 Map reduce framework calls createRecordReader of the 
 TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
 this method, we are initializing the TableRecordReaderImpl (restart method). 
 This initializes the scanner object. After this, map reduce framework calls 
 initialize on the RecordReader. In our case, this calls restart of the 
 TableRecordReaderImpl again. Here, it doesn't close the first scanner. At the 
 end of the task, only the second scanner object is closed. Because of this, 
 the smallest read point of HRegion is affected.
 We don't need to initialize the RecordReader in the createRecordReader method 
 and we need to close the scanner object in the restart method (in case the 
 restart method is called because of exceptions in the nextKeyValue method).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10546:
--

Attachment: trunk-HBASE-10546.patch

 Two scanner objects are open for each hbase map task but only one scanner 
 object is closed
 --

 Key: HBASE-10546
 URL: https://issues.apache.org/jira/browse/HBASE-10546
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
 Attachments: 0.94-HBASE-10546.patch, trunk-HBASE-10546.patch


 Map reduce framework calls createRecordReader of the 
 TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
 this method, we are initializing the TableRecordReaderImpl (restart method). 
 This initializes the scanner object. After this, map reduce framework calls 
 initialize on the RecordReader. In our case, this calls restart of the 
 TableRecordReaderImpl again. Here, it doesn't close the first scanner. At the 
 end of the task, only the second scanner object is closed. Because of this, 
 the smallest read point of HRegion is affected.
 We don't need to initialize the RecordReader in the createRecordReader method 
 and we need to close the scanner object in the restart method (in case the 
 restart method is called because of exceptions in the nextKeyValue method).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-12 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899618#comment-13899618
 ] 

Vasu Mariyala commented on HBASE-10505:
---

[~lhofhansl] Maybe this is one of the issues covered by HBASE-10416?

I think we need to fix the TestImportExport#testWithFilter test case by 
inserting at least one more row. It currently passes because we insert only 
one row (row1), perform the export, then perform the import with a prefix 
filter. The prefix filter's filterRowKey was never called (before this fix), 
so all rows are included in the import; but since there is only one row 
(row1), the test case doesn't really test the functionality of import with a 
filter.
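To make the test gap concrete, here is a tiny self-contained model (not the actual HBase test code) of why a single matching row cannot distinguish a broken import from a fixed one:

```java
import java.util.List;

// Simplified model of the test gap described above: with a single row that
// matches the prefix, an import that never applies filterRowKey produces
// the same result as one that does, so the test cannot tell them apart.
public class FilterTestSketch {
    // Stand-in for a PrefixFilter-style row-key check.
    static boolean prefixMatch(String row, String prefix) {
        return row.startsWith(prefix);
    }

    static long importCount(List<String> rows, String prefix, boolean filterApplied) {
        if (!filterApplied) return rows.size(); // broken import keeps everything
        return rows.stream().filter(r -> prefixMatch(r, prefix)).count();
    }

    public static void main(String[] args) {
        // One matching row: broken and fixed imports both yield 1 row.
        System.out.println(importCount(List.of("row1"), "row", false)); // 1
        System.out.println(importCount(List.of("row1"), "row", true));  // 1
        // Add a row outside the prefix: the results now differ.
        System.out.println(importCount(List.of("row1", "xrow"), "row", false)); // 2
        System.out.println(importCount(List.of("row1", "xrow"), "row", true));  // 1
    }
}
```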

 Import.filterKv does not call Filter.filterRowKey
 -

 Key: HBASE-10505
 URL: https://issues.apache.org/jira/browse/HBASE-10505
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.96.2, 0.94.17

 Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt


 The general contract of a Filter is that filterRowKey is called before 
 filterKeyValue.
 Import uses Filters for custom filtering, but it does not call 
 filterRowKey at all. That throws off some Filters (such as RowFilter, and 
 more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
 HBASE-10485.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10505) Import.filterKv does not call Filter.filterRowKey

2014-02-12 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899746#comment-13899746
 ] 

Vasu Mariyala commented on HBASE-10505:
---

For branches >= 0.96, the issue is fixed by HBASE-10416.

 Import.filterKv does not call Filter.filterRowKey
 -

 Key: HBASE-10505
 URL: https://issues.apache.org/jira/browse/HBASE-10505
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.96.2, 0.94.17

 Attachments: 10505-0.94-v2.txt, 10505-0.94.txt, 10505-0.96-v2.txt


 The general contract of a Filter is that filterRowKey is called before 
 filterKeyValue.
 Import uses Filters for custom filtering, but it does not call 
 filterRowKey at all. That throws off some Filters (such as RowFilter, and 
 more recently PrefixFilter, and InclusiveStopFilter). See HBASE-10493 and 
 HBASE-10485.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9218) HBase shell does not allow to change/assign custom table-column families attributes

2014-02-06 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13893690#comment-13893690
 ] 

Vasu Mariyala commented on HBASE-9218:
--

[~lhofhansl] Started making the changes. Will upload the patch soon.

 HBase shell does not allow to change/assign custom table-column families 
 attributes
 ---

 Key: HBASE-9218
 URL: https://issues.apache.org/jira/browse/HBASE-9218
 Project: HBase
  Issue Type: Bug
  Components: shell, Usability
Affects Versions: 0.94.6.1
Reporter: Vladimir Rodionov
Priority: Critical
 Fix For: 0.94.17


 HBase shell. In 0.94.6.1, the attempt to assign/change a custom table or CF 
 attribute does not throw any exception but has no effect. The same code works 
 fine in the Java API (on HTableDescriptor or HColumnDescriptor).
 This is a short shell session snippet:
 {code}
 hbase(main):009:0> disable 'T'
 0 row(s) in 18.0730 seconds
 hbase(main):010:0> alter 'T', NAME => 'df', 'FAKE' => '10'
 Updating all regions with the new schema...
 5/5 regions updated.
 Done.
 0 row(s) in 2.2900 seconds
 hbase(main):011:0> enable 'T'
 0 row(s) in 18.7140 seconds
 hbase(main):012:0> describe 'T'
 DESCRIPTION                                                           ENABLED
  {NAME => 'T', FAMILIES => [{NAME => 'df', DATA_BLOCK_ENCODING => 'NONE', 
  BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', 
  COMPRESSION => 'GZ', MIN_VERSIONS => '0', TTL => '2147483647', 
  KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'true', 
  ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}
 {code}
 As you can see, the new attribute 'FAKE' has not been added to column family 
 'df'.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10470) Import generating huge log file while importing large amounts of data

2014-02-05 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10470:
-

 Summary: Import generating huge log file while importing large 
amounts of data
 Key: HBASE-10470
 URL: https://issues.apache.org/jira/browse/HBASE-10470
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Critical
 Attachments: 0.94-HBASE-10470.patch

The Import mapper has System.out.println statements for each key value when a 
filter is associated with the import. This generates a huge log file when 
importing large amounts of data. These statements should be changed to trace 
level, and log4j should be used for logging.
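The fix described above amounts to replacing unconditional prints with guarded trace-level logging. A sketch of the pattern, using java.util.logging here as a stand-in for log4j (the method and message are hypothetical, not the actual Import code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the fix described above, with java.util.logging standing in
// for log4j: per-key-value messages go through a guarded trace-level call
// instead of an unconditional System.out.println.
public class ImportLoggingSketch {
    private static final Logger LOG =
        Logger.getLogger(ImportLoggingSketch.class.getName());

    static void reportFiltered(String keyValue) {
        // The guard avoids even building the message string when tracing
        // is off, so a large import no longer produces a huge log file.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Filtered out key value " + keyValue);
        }
    }

    public static void main(String[] args) {
        // FINEST is disabled by default, so this logs nothing.
        reportFiltered("row1/cf:q1");
    }
}
```

The same shape applies with log4j's `LOG.isTraceEnabled()` / `LOG.trace(...)` pair.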



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10470) Import generating huge log file while importing large amounts of data

2014-02-05 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10470:
--

Attachment: 0.94-HBASE-10470.patch

 Import generating huge log file while importing large amounts of data
 -

 Key: HBASE-10470
 URL: https://issues.apache.org/jira/browse/HBASE-10470
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Critical
 Attachments: 0.94-HBASE-10470.patch


 Import mapper has System.out.println statements for each key value if there 
 is filter associated with the import. This is generating huge log file while 
 importing large amounts of data. These statements must be changed to trace 
 and log4j must be used for logging.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10470) Import generating huge log file while importing large amounts of data

2014-02-05 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10470:
--

Attachment: trunk-HBASE-10470.patch

 Import generating huge log file while importing large amounts of data
 -

 Key: HBASE-10470
 URL: https://issues.apache.org/jira/browse/HBASE-10470
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Critical
 Attachments: 0.94-HBASE-10470.patch, trunk-HBASE-10470.patch


 Import mapper has System.out.println statements for each key value if there 
 is filter associated with the import. This is generating huge log file while 
 importing large amounts of data. These statements must be changed to trace 
 and log4j must be used for logging.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10470) Import generating huge log file while importing large amounts of data

2014-02-05 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10470:
--

Status: Patch Available  (was: Open)

 Import generating huge log file while importing large amounts of data
 -

 Key: HBASE-10470
 URL: https://issues.apache.org/jira/browse/HBASE-10470
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Critical
 Attachments: 0.94-HBASE-10470.patch, trunk-HBASE-10470.patch


 Import mapper has System.out.println statements for each key value if there 
 is filter associated with the import. This is generating huge log file while 
 importing large amounts of data. These statements must be changed to trace 
 and log4j must be used for logging.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10416) Improvements to the import flow

2014-01-28 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10416:
--

Release Note: 
Import with this fix supports:

a) Filtering of rows using Filter#filterRowKey(byte[] buffer, int offset, 
int length).

b) A durability parameter (e.g. -Dimport.wal.durability=SKIP_WAL) while 
importing data into HBase. If the data doesn't need to be replicated to the 
DR cluster, or if the same import job will be run on the DR cluster, consider 
using SKIP_WAL durability for performance.

 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter which filter the data in filterRowKey method 
 rather than the filterKeyValue method). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10416) Improvements to the import flow

2014-01-28 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10416:
--

Attachment: HBASE-10416-rev1.patch

 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416-rev1.patch, HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter which filter the data in filterRowKey method 
 rather than the filterKeyValue method). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10416) Improvements to the import flow

2014-01-28 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13884491#comment-13884491
 ] 

Vasu Mariyala commented on HBASE-10416:
---

Sorry for the delay, and thanks for the review comments.

[~yuzhih...@gmail.com]

1. Felt that constructing a filter object from the filter class and filter 
args would be a utility method, useful when extending the import utility for 
specific customizations.
2. Fixed the long line and the javadoc warnings.
3. Updated the release note description in the jira.

[~ndimiduk]

Saw your other issues related to making things like the mapper or reducer 
configurable and reusing the code. Would you mind discussing these issues 
when you are free? You can ping me on gmail.

 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416-rev1.patch, HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter which filter the data in filterRowKey method 
 rather than the filterKeyValue method). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10416) Improvements to the import flow

2014-01-24 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10416:
-

 Summary: Improvements to the import flow
 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala


The following improvements can be made to the Import logic:

a) Make the import extensible (i.e., change the filter from a static member 
of Import to an instance variable of the mapper, and make the mappers or 
variables of interest protected).

b) Make sure that Import calls the filterRowKey method of the filter (useful 
if we want to filter an organization's data based on the row key, or when 
using filters like PrefixFilter). The existing test case 
TestImportExport#testWithFilter relies on this assumption but has so far 
passed only because a single row is inserted into the table.

c) Provide an option to specify the durability during the import (specifying 
SKIP_WAL durability would improve the performance of a restore considerably). 
[~lhofhansl] suggested that this should be a parameter to the import.

d) Some minor refactoring to avoid building a comma-separated string for the 
filter args.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10416) Improvements to the import flow

2014-01-24 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10416:
--

Attachment: HBASE-10416.patch

Attaching the patch for the above mentioned issues.

 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.





[jira] [Updated] (HBASE-10416) Improvements to the import flow

2014-01-24 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10416:
--

Status: Patch Available  (was: Open)

 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.





[jira] [Updated] (HBASE-10416) Improvements to the import flow

2014-01-24 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10416:
--

Description: 
Following improvements can be made to the Import logic

a) Make the import extensible (i.e., remove the filter from being a static 
member of Import and make it an instance variable of the mapper, make the 
mappers or variables of interest protected. )

b) Make sure that the Import calls filterRowKey method of the filter (Useful if 
we want to filter the data of an organization based on the row key or using 
filters like PrefixFilter which filter the data in filterRowKey method rather 
than the filterKeyValue method). The existing test case in 
TestImportExport#testWithFilter works with this assumption but is so far 
successful because there is only one row inserted into the table.

c) Provide an option to specify the durability during the import (Specifying 
the Durability as SKIP_WAL would improve the performance of restore 
considerably.) [~lhofhansl] suggested that this should be a parameter to the 
import.

d) Some minor refactoring to avoid building a comma separated string for the 
filter args.

  was:
Following improvements can be made to the Import logic

a) Make the import extensible (i.e., remove the filter from being a static 
member of Import and make it an instance variable of the mapper, make the 
mappers or variables of interest protected. )

b) Make sure that the Import calls filterRowKey method of the filter (Useful if 
we want to filter the data of an organization based on the row key or using 
filters like PrefixFilter). The existing test case in 
TestImportExport#testWithFilter works with this assumption but is so far 
successful because there is only one row inserted into the table.

c) Provide an option to specify the durability during the import (Specifying 
the Durability as SKIP_WAL would improve the performance of restore 
considerably.) [~lhofhansl] suggested that this should be a parameter to the 
import.

d) Some minor refactoring to avoid building a comma separated string for the 
filter args.


 Improvements to the import flow
 ---

 Key: HBASE-10416
 URL: https://issues.apache.org/jira/browse/HBASE-10416
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Reporter: Vasu Mariyala
 Attachments: HBASE-10416.patch


 Following improvements can be made to the Import logic
 a) Make the import extensible (i.e., remove the filter from being a static 
 member of Import and make it an instance variable of the mapper, make the 
 mappers or variables of interest protected. )
 b) Make sure that the Import calls filterRowKey method of the filter (Useful 
 if we want to filter the data of an organization based on the row key or 
 using filters like PrefixFilter which filter the data in filterRowKey method 
 rather than the filterKeyValue method). The existing test case in 
 TestImportExport#testWithFilter works with this assumption but is so far 
 successful because there is only one row inserted into the table.
 c) Provide an option to specify the durability during the import (Specifying 
 the Durability as SKIP_WAL would improve the performance of restore 
 considerably.) [~lhofhansl] suggested that this should be a parameter to the 
 import.
 d) Some minor refactoring to avoid building a comma separated string for the 
 filter args.





[jira] [Created] (HBASE-10317) getClientPort method of MiniZooKeeperCluster does not always return the correct value

2014-01-10 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10317:
-

 Summary: getClientPort method of MiniZooKeeperCluster does not 
always return the correct value
 Key: HBASE-10317
 URL: https://issues.apache.org/jira/browse/HBASE-10317
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Minor


{code}
//Starting 5 zk servers
MiniZooKeeperCluster cluster = hbt.startMiniZKCluster(5);
int defaultClientPort = 21818;
cluster.setDefaultClientPort(defaultClientPort);
cluster.killCurrentActiveZooKeeperServer();
cluster.getClientPort(); //Still returns the port of the zk server that was 
killed in the previous step
{code}
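A minimal model of the bug and one possible fix, under the assumption that the cluster tracks an active-server index. The class and field names below are illustrative, not the real MiniZooKeeperCluster internals: the point is that getClientPort should follow the currently active server rather than a stale value.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the reported behavior; not the real MiniZooKeeperCluster.
class ToyZkCluster {
  private final List<Integer> clientPorts = new ArrayList<>();
  private int activeServerIndex = 0;

  void startServer(int port) { clientPorts.add(port); }

  // Simulate killing the current active server: the next surviving server
  // becomes active.
  void killCurrentActiveServer() {
    clientPorts.remove(activeServerIndex);
    if (activeServerIndex >= clientPorts.size()) {
      activeServerIndex = 0;
    }
  }

  // Fixed behavior: always report the port of the currently active server,
  // never the port of a server that was already killed.
  int getClientPort() { return clientPorts.get(activeServerIndex); }
}
```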





[jira] [Updated] (HBASE-10317) getClientPort method of MiniZooKeeperCluster does not always return the correct value

2014-01-10 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10317:
--

Attachment: HBASE-10317.patch

 getClientPort method of MiniZooKeeperCluster does not always return the 
 correct value
 -

 Key: HBASE-10317
 URL: https://issues.apache.org/jira/browse/HBASE-10317
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Minor
 Attachments: HBASE-10317.patch


 {code}
 //Starting 5 zk servers
 MiniZooKeeperCluster cluster = hbt.startMiniZKCluster(5);
 int defaultClientPort = 21818;
 cluster.setDefaultClientPort(defaultClientPort);
 cluster.killCurrentActiveZooKeeperServer();
 cluster.getClientPort(); //Still returns the port of the zk server that was 
 killed in the previous step
 {code}





[jira] [Updated] (HBASE-10317) getClientPort method of MiniZooKeeperCluster does not always return the correct value

2014-01-10 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10317:
--

Status: Patch Available  (was: Open)

 getClientPort method of MiniZooKeeperCluster does not always return the 
 correct value
 -

 Key: HBASE-10317
 URL: https://issues.apache.org/jira/browse/HBASE-10317
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala
Priority: Minor
 Attachments: HBASE-10317.patch


 {code}
 //Starting 5 zk servers
 MiniZooKeeperCluster cluster = hbt.startMiniZKCluster(5);
 int defaultClientPort = 21818;
 cluster.setDefaultClientPort(defaultClientPort);
 cluster.killCurrentActiveZooKeeperServer();
 cluster.getClientPort(); //Still returns the port of the zk server that was 
 killed in the previous step
 {code}





[jira] [Commented] (HBASE-10067) Filters are not applied if columns are added to the scanner

2013-12-02 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837227#comment-13837227
 ] 

Vasu Mariyala commented on HBASE-10067:
---

The filter is applied only when the filter columns are part of the scan column 
list. See HBASE-4364 for more information.
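A toy restatement of that rule in plain Java (not the HBase API): a filter's condition only takes effect when its column is among the columns the scan actually selects, so the workaround is to add the filter's column to the scan.

```java
import java.util.Set;

class FilterColumnRule {
  // Toy model: a filter only sees cells the scan selects, so its condition
  // is effective only when its column is part of the scanned columns.
  static boolean filterApplied(Set<String> scanColumns, String filterColumn) {
    return scanColumns.contains(filterColumn);
  }
}
```

For example, a SingleColumnValueFilter on "q2" is ignored when the scan only adds "q1"; adding "q2" to the scan makes it effective.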

 Filters are not applied if columns are added to the scanner
 ---

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 When columns are added to the scanner, the filtering does not happen and a 
 full scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code snippet:
 {code}
 Scan scan = new Scan();
 scan.addColumn(family, qualifier); // entire scan happens; filters are ignored
 SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFam, col,
     CompareOp.EQUAL, val);
 filterOne.setFilterIfMissing(true);
 FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
 scan.setFilter(filter); // not working
 // If addFamily is used, it works:
 scan.addFamily(family);
 scan.setFilter(filter); // works
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-4200) blockCache summary - web UI

2013-10-02 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13784253#comment-13784253
 ] 

Vasu Mariyala commented on HBASE-4200:
--

Is this issue still valid? Do we want to show information like table name, 
column family, total number of blocks, and heap size on the region server page?

 blockCache summary - web UI
 ---

 Key: HBASE-4200
 URL: https://issues.apache.org/jira/browse/HBASE-4200
 Project: HBase
  Issue Type: Sub-task
Reporter: Doug Meil
Priority: Minor

 Web UI for block cache summary report.
 This will most likely be a new web-page linked from the RegionServer detail 
 page.





[jira] [Commented] (HBASE-4364) Filters applied to columns not in the selected column list are ignored

2013-09-18 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13771037#comment-13771037
 ] 

Vasu Mariyala commented on HBASE-4364:
--

Thanks [~yuzhih...@gmail.com] for your comments. Before doing the performance 
benchmarking, I wanted to get some comments on the approach. I will work on it 
and post the observations.

Sure, I would post the next patch on the review board.

 Filters applied to columns not in the selected column list are ignored
 --

 Key: HBASE-4364
 URL: https://issues.apache.org/jira/browse/HBASE-4364
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.90.4, 0.92.0, 0.94.0
Reporter: Todd Lipcon
Priority: Critical
 Attachments: 
 HBASE-4364-failing-test-with-simplest-custom-filter.patch, HBASE-4364.patch, 
 hbase-4364_trunk.patch, hbase-4364_trunk-v2.patch


 For a scan, if you select some set of columns using addColumns(), and then 
 apply a SingleColumnValueFilter that restricts the results based on some 
 other columns which aren't selected, then those filter conditions are ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4364) Filters applied to columns not in the selected column list are ignored

2013-09-17 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-4364:
-

Attachment: HBASE-4364.patch

I have attached the patch HBASE-4364.patch, which adds support for filtering 
based on column qualifiers/column families that differ from the selection list 
(scan.getFamilyMap()). This feature reduces the amount of data that is sent to 
the client in several scenarios.

On a high level, the patch does the following

a) Filter communicates its needs of the column qualifiers/column families 
through getFamilyMap() method of Filter. This method has been added in this 
patch.

b) While selecting the store scanners, the requirement from the filters is 
considered as well.

c) ScanQueryMatcher has been changed to apply the filter only if it is part of 
the required column qualifiers specified by the filter. It has also been 
changed to include a key value only if it is part of the scan family map.

d) The change was implemented in a backward-compatible manner; the behavior of 
the existing filters is not affected.

Please review the first version of the patch and provide your valuable inputs 
on the approach that was used.

If the approach is ok, then I will continue to work on refining the code and 
changing the existing filters like SingleColumnValueFilter to use the new 
approach.

+ [~lhofhansl] to comment on this
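One way to picture steps (a) and (b): the scanner could union the scan's family map with the columns the filter declares via getFamilyMap(). The sketch below is a self-contained illustration under that assumption; the class name and merge policy are mine, not the patch's.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class FamilyMapMerge {
  // Union the scan's selection with the columns the filter needs, so the
  // store scanners read the filter's columns even when the client did not
  // select them. Keys are column families, values are qualifier sets.
  static Map<String, Set<String>> effectiveFamilyMap(
      Map<String, Set<String>> scanFamilyMap,
      Map<String, Set<String>> filterFamilyMap) {
    Map<String, Set<String>> merged = new HashMap<>();
    scanFamilyMap.forEach((f, q) -> merged.put(f, new HashSet<>(q)));
    filterFamilyMap.forEach((f, q) ->
        merged.computeIfAbsent(f, k -> new HashSet<>()).addAll(q));
    return merged;
  }
}
```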

 Filters applied to columns not in the selected column list are ignored
 --

 Key: HBASE-4364
 URL: https://issues.apache.org/jira/browse/HBASE-4364
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.90.4, 0.92.0, 0.94.0
Reporter: Todd Lipcon
Priority: Critical
 Attachments: 
 HBASE-4364-failing-test-with-simplest-custom-filter.patch, HBASE-4364.patch, 
 hbase-4364_trunk.patch, hbase-4364_trunk-v2.patch


 For a scan, if you select some set of columns using addColumns(), and then 
 apply a SingleColumnValueFilter that restricts the results based on some 
 other columns which aren't selected, then those filter conditions are ignored.



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762419#comment-13762419
 ] 

Vasu Mariyala commented on HBASE-8930:
--

{code}
for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
  if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
    throw new AssertionError("Column family prefix used twice: " +
        metricName);
  }
}
{code}

The above code throws an error when the metric name starts with "cf.cf.". It 
would be helpful if anyone could shed some light on the reason behind checking 
for "cf.cf.".

The scenarios in which we would have a metric name starting with "cf.cf." are 
as follows (see the generateSchemaMetricsPrefix method of SchemaMetrics):

   a) The column family name is "cf"
 
   AND

   b) The table name is either "" or useTableNameGlobally is false (the 
useTableNameGlobally variable of SchemaMetrics).
   The table name is empty only in the case of ALL_SCHEMA_METRICS, which has 
the column family "", so we can rule out the possibility of the table name 
being empty.
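Modeled on the behavior described above, this is an illustrative reimplementation of the prefix logic, not the actual generateSchemaMetricsPrefix source: when the table name is not used, the prefix is "cf.&lt;family&gt;.", so a family literally named "cf" yields "cf.cf.".

```java
class SchemaMetricsPrefix {
  // Illustrative model of generateSchemaMetricsPrefix; the real code lives
  // in org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.
  static String prefix(String table, String family, boolean useTableName) {
    if (useTableName && !table.isEmpty()) {
      return "tbl." + table + ".cf." + family + ".";
    }
    // Without the table name, a family named "cf" produces "cf.cf.",
    // which trips the assertion quoted above.
    return "cf." + family + ".";
  }
}
```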

Also note that the variables useTableNameGlobally and tableAndFamilyToMetrics 
of SchemaMetrics are static and shared across all the tests that run in a 
single JVM. In our case, the profile runAllTests has the configuration below:

{code}
<surefire.firstPartForkMode>once</surefire.firstPartForkMode>
<surefire.firstPartParallel>none</surefire.firstPartParallel>
<surefire.firstPartThreadCount>1</surefire.firstPartThreadCount>
<surefire.firstPartGroups>org.apache.hadoop.hbase.SmallTests</surefire.firstPartGroups>
{code}

Hence all of our small tests run in a single JVM and share the variables 
useTableNameGlobally and tableAndFamilyToMetrics.

The reasons why the order of execution of the tests caused this failure are as 
follows

a) A bunch of small tests, like TestMemStore and TestSchemaConfigured, set 
useTableNameGlobally to false. But these tests don't create tables that have 
a column family named "cf".

b) If the tests in step (a) run before the tests which create tables/regions 
with the column family "cf", metric names will start with "cf.cf.".

c) If any other tests that validate schema metrics, like the failed tests 
(TestScannerSelectionUsingTTL, TestHFileReaderV1, 
TestScannerSelectionUsingKeyRange), run after that, they fail because the 
metric names start with "cf.cf.".

On my local machine, I tried to re-create the failure scenario by changing the 
surefire test configuration and creating a simple test (TestSimple) that just 
creates a region for the table 'testtable' and column family 'cf'.

{code}
TestSimple.java
---------------
  @Before
  public void setUp() throws Exception {
    HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
    htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
    HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
    this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
        TEST_UTIL.getConfiguration(), htd);

    Put put = new Put(ROW_BYTES);
    for (int i = 0; i < 10; i += 2) {
      // puts 0, 2, 4, 6 and 8
      put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
          Bytes.toBytes(VALUE_PREFIX + i));
    }
    this.region.put(put);
    this.region.flushcache();
  }

  @Test
  public void testFilterInvocation() throws Exception {
    System.out.println("testing");
  }

  @After
  public void tearDown() throws Exception {
    HLog hlog = region.getLog();
    region.close();
    hlog.closeAndDelete();
  }

Successful run:

---
 T E S T S
---
2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info from 
SCDynamicStore
Running org.apache.hadoop.hbase.filter.TestSimple
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.618 sec
Running org.apache.hadoop.hbase.regionserver.TestMemStore
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.542 sec

Results :

Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
--

Failed run order:

---
 T E S T S
---
2013-09-09 15:43:21.466 java[46890:db03] Unable to load realm mapping info from 
SCDynamicStore
Running 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.96-trunk-Independent-Test-Execution.patch
0.94-Independent-Test-Execution.patch

Attached the changes to ensure small tests run in a separate JVM, similar to 
the large or medium tests.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-Independent-Test-Execution.patch, 0.95-HBASE-8930.patch, 
 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 The filter (AllwaysNextColFilter) always returns 
 ReturnCode.INCLUDE_AND_NEXT_COL and prints the KV's qualifier.
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
 |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
 |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  
 {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
 |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762510#comment-13762510
 ] 

Vasu Mariyala commented on HBASE-8930:
--

As I mentioned, the test case which causes the metrics to be created for the 
column family "cf" is TestInvocationRecordFilter. Prior to this, I didn't see 
any test case under the SmallTests category which caused metrics to be created 
for a column family named "cf".

The following are the solutions

a) Change the column family name in TestInvocationRecordFilter to "myCF".

b) Change the category of the tests which change the static variables to the 
medium category (Lars's suggestion).

c) Change the tests to run in different JVMs (3:05.855s vs 5:09.840s on my 
local machine). With -PlocalTests, the small tests always run in a separate 
JVM.

In the context of this JIRA, can we just follow approach (a) and raise a 
separate JIRA to iron out the issues caused by executing the tests in a single 
JVM?
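For option (c), the surefire change would look roughly like the following. The forkMode value is an assumption based on Maven Surefire 2.x options; the attached *-Independent-Test-Execution.patch is authoritative for what was actually changed.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Fork a fresh JVM per test class so static state such as
         SchemaMetrics.useTableNameGlobally cannot leak across tests. -->
    <forkMode>always</forkMode>
  </configuration>
</plugin>
```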

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-Independent-Test-Execution.patch, 0.95-HBASE-8930.patch, 
 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 The filter (AllwaysNextColFilter) always returns 
 ReturnCode.INCLUDE_AND_NEXT_COL and prints the KV's qualifier.
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
 |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
 |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  
 {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
 |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.96-HBASE-8930-rev3.patch
0.94-HBASE-8930-rev2.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 The filter (AllwaysNextColFilter) always returns 
 ReturnCode.INCLUDE_AND_NEXT_COL and prints the KV's qualifier.
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
 |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
 |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  
 {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
 |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev6.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.13, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row requesting fewer columns than were persisted - use a filter to print kvs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter allwaysNextColFilter = new 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762541#comment-13762541
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Attached the patch HBASE-8930-rev6.patch (trunk), 0.96-HBASE-8930-rev3.patch 
(0.96) and 0.94-HBASE-8930-rev2.patch (0.94), which have the column family name 
changed from "cf" to "mycf".


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Status: Patch Available  (was: Reopened)


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930-rev6.patch)


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev6.patch


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Status: Open  (was: Patch Available)


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Status: Patch Available  (was: Open)


[jira] [Created] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-9490:


 Summary: Provide independent execution environment for small tests
 Key: HBASE-9490
 URL: https://issues.apache.org/jira/browse/HBASE-9490
 Project: HBase
  Issue Type: Improvement
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala


Some of the state related to schema metrics is stored in static variables, and 
since the small test cases run in a single JVM, this causes random behavior in 
the output of the tests.

An example scenario is the test case failures in HBASE-8930

{code}
for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
  if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
    throw new AssertionError("Column family prefix used twice: " +
        metricName);
  }
}
{code}

The above code throws an error when the metric name starts with "cf.cf.". It 
would be helpful if anyone sheds some light on the reason behind checking for 
"cf.cf.".

The scenarios in which we would have a metric name start with "cf.cf." are as 
follows (see the generateSchemaMetricsPrefix method of SchemaMetrics):

a) The column family name is "cf"

AND

b) The table name is empty ("") or useTableNameGlobally (a static variable of 
SchemaMetrics) is false.
The table name is empty only in the case of ALL_SCHEMA_METRICS, which has the 
column family "". So we can rule out the possibility of the table name being 
empty.

Also note that the variables useTableNameGlobally and tableAndFamilyToMetrics 
of SchemaMetrics are static and are shared across all the tests that run in a 
single JVM. In our case, the runAllTests profile has the configuration below:

{code}
<surefire.firstPartForkMode>once</surefire.firstPartForkMode>
<surefire.firstPartParallel>none</surefire.firstPartParallel>
<surefire.firstPartThreadCount>1</surefire.firstPartThreadCount>
<surefire.firstPartGroups>org.apache.hadoop.hbase.SmallTests</surefire.firstPartGroups>

{code}

Hence all of our small tests run in a single JVM and share the variables 
useTableNameGlobally and tableAndFamilyToMetrics.
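The effect of sharing those statics can be sketched in plain Java (illustrative names only; the real fields live in SchemaMetrics): once one test class flips the static flag, every test class that runs later in the same JVM observes the changed value.

```java
// Plain-Java sketch of static state shared across test classes in one JVM.
// The field and method names are hypothetical stand-ins for SchemaMetrics.
public class StaticStateLeak {
    static boolean useTableNameGlobally = true;

    static String metricPrefix(String table, String family) {
        // When the table-name part is suppressed, only "cf.<family>." remains.
        String tablePart = useTableNameGlobally ? "tbl." + table + "." : "";
        return tablePart + "cf." + family + ".";
    }

    public static void main(String[] args) {
        // "Test A" (think TestMemStore) flips the static flag...
        useTableNameGlobally = false;
        // ..."Test B", run later in the same JVM, now sees "cf.cf." prefixes.
        System.out.println(metricPrefix("testtable", "cf")); // cf.cf.
    }
}
```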

The reasons why the order of execution of the tests caused this failure are as 
follows

a) A bunch of small tests, like TestMemStore and TestSchemaConfigured, set 
useTableNameGlobally to false. But these tests don't create tables that have 
the column family name "cf".

b) If the tests in step (a) run before the tests which create tables/regions 
with column family 'cf', metric names start with "cf.cf.".

c) If any other tests that validate schema metrics, like the failed tests 
(TestScannerSelectionUsingTTL, TestHFileReaderV1, 
TestScannerSelectionUsingKeyRange), run after that, they fail because the 
metric names start with "cf.cf.".
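How a "cf.cf." prefix arises can be shown with a minimal plain-Java sketch (this is NOT the real SchemaMetrics code; generatePrefix below is a hypothetical stand-in for generateSchemaMetricsPrefix): when the table-name part is omitted, a family literally named "cf" yields "cf." + "cf" + ".".

```java
// Hypothetical reconstruction of the prefix logic described above.
public class SchemaMetricsPrefixSketch {
    static final String CF_PREFIX = "cf.";

    // Stand-in for SchemaMetrics.generateSchemaMetricsPrefix.
    static String generatePrefix(String tableName, String cfName) {
        String tablePart = tableName.isEmpty() ? "" : "tbl." + tableName + ".";
        return tablePart + CF_PREFIX + cfName + ".";
    }

    public static void main(String[] args) {
        String prefix = generatePrefix("", "cf");
        System.out.println(prefix);                                   // cf.cf.
        // Exactly the condition the quoted AssertionError check trips on:
        System.out.println(prefix.startsWith(CF_PREFIX + CF_PREFIX)); // true
    }
}
```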

On my local machine, I have tried to re-create the failure scenario by changing 
the Surefire test configuration and creating a simple test (TestSimple) which 
just creates a region for the table 'testtable' and column family 'cf'.

{code}
TestSimple.java
--
  @Before
  public void setUp() throws Exception {
HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
TEST_UTIL.getConfiguration(), htd);

Put put = new Put(ROW_BYTES);
for (int i = 0; i < 10; i += 2) {
  // puts 0, 2, 4, 6 and 8
  put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
  Bytes.toBytes(VALUE_PREFIX + i));
}
this.region.put(put);
this.region.flushcache();
  }

  @Test
  public void testFilterInvocation() throws Exception {
System.out.println("testing");
  }

  @After
  public void tearDown() throws Exception {
HLog hlog = region.getLog();
region.close();
hlog.closeAndDelete();
  }

Successful run:

---
 T E S T S
---
2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info from 
SCDynamicStore
Running org.apache.hadoop.hbase.filter.TestSimple
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.618 sec
Running org.apache.hadoop.hbase.regionserver.TestMemStore
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.542 sec

Results :

Tests 

[jira] [Commented] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762742#comment-13762742
 ] 

Vasu Mariyala commented on HBASE-9490:
--

The following are the possible solutions:

a) Change the category of the tests which change the static variables to the 
medium category ([~lhofhansl]'s suggestion)

b) Change the tests to run in different JVMs (3:05.855s vs 5:09.840s on my 
local machine). Currently, with -PlocalTests, the small tests always run in 
separate JVMs, so the time increase would mostly be on the build machine.

 Provide independent execution environment for small tests
 -

 Key: HBASE-9490
 URL: https://issues.apache.org/jira/browse/HBASE-9490
 Project: HBase
  Issue Type: Improvement
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762748#comment-13762748
 ] 

Vasu Mariyala commented on HBASE-8930:
--

No, whatever has been checked in is correct. I just re-attached the patch rev6 
to let Hadoop QA run the precommit again, as the test failures reported in the 
earlier email are not related to this patch.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row with a subset of the columns - use a filter to print KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, 

[jira] [Updated] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9490:
-

Attachment: 0.96-trunk-Independent-Test-Execution.patch
0.94-Independent-Test-Execution.patch

 Provide independent execution environment for small tests
 -

 Key: HBASE-9490
 URL: https://issues.apache.org/jira/browse/HBASE-9490
 Project: HBase
  Issue Type: Improvement
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala
 Attachments: 0.94-Independent-Test-Execution.patch, 
 0.96-trunk-Independent-Test-Execution.patch



[jira] [Commented] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-06 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13760446#comment-13760446
 ] 

Vasu Mariyala commented on HBASE-9301:
--

Yes, I would post the patch soon.

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-9301.patch, HBASE-9301.patch


 A reasonable default for hbase.dynamic.jars.dir would be hbase.rootdir/jars 
 so that folks aren't forced to edit their hbase-site.xml to take advantage 
 of the new, cool feature to load coprocessor/custom filter jars out of HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-06 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Attachment: 0.94-HBASE-9301-rev1.patch

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-9301.patch, 0.94-HBASE-9301-rev1.patch, 
 HBASE-9301.patch




[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-06 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Attachment: HBASE-9301-rev1.patch

Attached the patch files 0.94-HBASE-9301-rev1.patch and HBASE-9301-rev1.patch. 
The behavior is: hbase.dynamic.jars.dir defaults to .lib in 0.94 and to lib in 
0.96, with the migration from 0.94 to 0.96 taken care of by the NamespaceUpgrade.
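The defaulting described here can be sketched in plain Java (illustrative only; the actual resolution happens inside HBase's configuration and dynamic class-loader code, and the key names are the only assumed-real identifiers):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: fall back to "<hbase.rootdir>/lib" when hbase.dynamic.jars.dir
// is not set explicitly (the 0.96 layout described in the comment above).
public class DynamicJarsDirDefault {
    static String dynamicJarsDir(Map<String, String> conf) {
        String explicit = conf.get("hbase.dynamic.jars.dir");
        if (explicit != null) {
            return explicit;
        }
        return conf.get("hbase.rootdir") + "/lib";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.rootdir", "hdfs://nn/hbase");
        System.out.println(dynamicJarsDir(conf)); // hdfs://nn/hbase/lib
    }
}
```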

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-9301.patch, 0.94-HBASE-9301-rev1.patch, 
 HBASE-9301.patch, HBASE-9301-rev1.patch




[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-06 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Attachment: HBASE-9301-rev2.patch

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-9301.patch, 0.94-HBASE-9301-rev1.patch, 
 HBASE-9301.patch, HBASE-9301-rev1.patch, HBASE-9301-rev2.patch




[jira] [Commented] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-05 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13759754#comment-13759754
 ] 

Vasu Mariyala commented on HBASE-9301:
--

[~enis] or [~lhofhansl] Can you please point me to the code/help me in 
understanding the issues that could arise because of rootdir/lib in 0.94? I am 
looking into HBaseFsck. 

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
Assignee: Vasu Mariyala
 Attachments: 0.94-HBASE-9301.patch, HBASE-9301.patch




[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-04 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev5.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-04 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13758342#comment-13758342
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Attached the newer versions HBASE-8930-rev5.patch (top of trunk) and 
0.96-HBASE-8930-rev2.patch (top of 0.96) which contain the changes described in 
the previous comments.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-04 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.96-HBASE-8930-rev2.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch


 1- Fill a row with some columns
 2- Get the row requesting only a subset of those columns, using a filter to print the KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter allwaysNextColFilter = new AllwaysNextColFilter();
 Get get = new Get(row);
 get.addColumn(cf, col1); //5581
 get.addColumn(cf, col1v); //5584
 get.addColumn(cf, col1g); //5586
 

[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-04 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Attachment: 0.94-HBASE-9301.patch

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
 Attachments: 0.94-HBASE-9301.patch


 A reasonable default for hbase.dynamic.jars.dir would be hbase.rootdir/jars 
 so that folks aren't forced to edit their hbase-site.xml to take advantage 
 of the new, cool feature to load coprocessor/custom filter jars out of HDFS.
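 For reference, enabling the feature today means setting the property by hand; a minimal hbase-site.xml fragment (the HDFS path below is purely illustrative) would look like:

```xml
<!-- hbase-site.xml: the manual setting the proposed default would make unnecessary -->
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>hdfs://namenode:8020/hbase/jars</value>
</property>
```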

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-04 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Attachment: HBASE-9301.patch

Attached the patch files for 0.94 (0.94-HBASE-9301.patch) and trunk/0.96 
(HBASE-9301.patch). The patch defaults hbase.dynamic.jars.dir to 
hbase.rootdir/lib
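A rough illustration of the idea (this is a hypothetical sketch, not the code in the attached patch): defaulting amounts to deriving the lib path from the configured root directory when no explicit value is set.

```java
// Hypothetical sketch: derive a default dynamic-jars directory from hbase.rootdir.
// Illustrative only; the attached patch wires this into HBase configuration handling.
public class DynamicJarsDirDefault {
    static String defaultJarsDir(String rootDir) {
        // Append "lib" under the root directory, avoiding a double slash.
        return rootDir.endsWith("/") ? rootDir + "lib" : rootDir + "/lib";
    }

    public static void main(String[] args) {
        System.out.println(defaultJarsDir("hdfs://namenode:8020/hbase"));
    }
}
```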

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
Reporter: James Taylor
 Attachments: 0.94-HBASE-9301.patch, HBASE-9301.patch


 A reasonable default for hbase.dynamic.jars.dir would be hbase.rootdir/jars 
 so that folks aren't forced to edit their hbase-site.xml to take advantage 
 of the new, cool feature to load coprocessor/custom filter jars out of HDFS.



[jira] [Updated] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-04 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9301:
-

Status: Patch Available  (was: Open)

 Default hbase.dynamic.jars.dir to hbase.rootdir/jars
 

 Key: HBASE-9301
 URL: https://issues.apache.org/jira/browse/HBASE-9301
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.95.2, 0.98.0, 0.94.12, 0.96.0
Reporter: James Taylor
 Attachments: 0.94-HBASE-9301.patch, HBASE-9301.patch


 A reasonable default for hbase.dynamic.jars.dir would be hbase.rootdir/jars 
 so that folks aren't forced to edit their hbase-site.xml to take advantage 
 of the new, cool feature to load coprocessor/custom filter jars out of HDFS.



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.95-HBASE-8930.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.95-HBASE-8930.patch, 
 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.94-HBASE-8930.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.95-HBASE-8930.patch, 
 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev3.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.95-HBASE-8930.patch, 
 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch, HBASE-8930-rev3.patch



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757055#comment-13757055
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Attached 0.94-HBASE-8930.patch, 0.95-HBASE-8930.patch and HBASE-8930-rev3.patch 
(top of trunk). A separate version of the patch is needed for trunk because of 
the recent changes that expand the usage of the Cell interface in place of 
KeyValue. A different patch has been attached for 0.95 because applying the 
trunk patch there rejects the ColumnTracker hunk: the checkColumn method 
signature in trunk contains a line that is missing in 0.95. Functionally, 
these patches are identical.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.95-HBASE-8930.patch, 
 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch, HBASE-8930-rev3.patch



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.94-HBASE-8930-rev1.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: 0.95-HBASE-8930-rev1.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 8930-0.94.txt, 
 HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, 
 HBASE-8930-rev3.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      | ...
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
     Configuration config = HBaseConfiguration.create();
     config.set("hbase.zookeeper.quorum", "myZK");
     HTable hTable = new HTable(config, "testTable");
     byte[] cf = Bytes.toBytes("cf");
     byte[] row = Bytes.toBytes("row");
     byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
     byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
     byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
     byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
     byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
     byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
     byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
     byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
     byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
     byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
     byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
     byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
     byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
     byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
     // ===== INSERTION =====//
     Put put = new Put(row);
     put.add(cf, col1, Bytes.toBytes((short) 1));
     put.add(cf, col2, Bytes.toBytes((short) 1));
     put.add(cf, col3, Bytes.toBytes((short) 3));
     put.add(cf, col4, Bytes.toBytes((short) 3));
     put.add(cf, col5, Bytes.toBytes((short) 3));
     put.add(cf, col6, Bytes.toBytes((short) 3));
     hTable.put(put);
     put = new Put(row);
     put.add(cf, col1v, Bytes.toBytes((short) 10));
     put.add(cf, col2v, Bytes.toBytes((short) 10));
     put.add(cf, col3v, Bytes.toBytes((short) 10));
     put.add(cf, col4v, Bytes.toBytes((short) 10));
     put.add(cf, col5v, Bytes.toBytes((short) 10));
     put.add(cf, col6v, Bytes.toBytes((short) 10));
     hTable.put(put);
     hTable.flushCommits();
     // ===== READING =====//
     Filter allwaysNextColFilter = new AllwaysNextColFilter();
     Get get = new Get(row);
     get.addColumn(cf, col1);  //5581
     get.addColumn(cf, col1v); //5584
     get.addColumn(cf, col1g); //5586
     get.addColumn(cf, col2);  //5591
     get.addColumn(cf, col2v); //5594
     get.addColumn(cf, 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev4.patch

 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757243#comment-13757243
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Attached 0.94-HBASE-8930-rev1.patch, 0.95-HBASE-8930-rev1.patch and 
HBASE-8930-rev4.patch (top of trunk) after discussion with Lars. The change is 
to do a lazy comparison in ScanWildcardColumnTracker: always return 
MatchCode.INCLUDE from its checkColumn method and do all the processing in the 
checkVersions method.
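The lazy-comparison idea in the comment above can be sketched as follows. This is an illustrative stand-in, not the actual ScanWildcardColumnTracker code; the MatchCode values, fields, and signatures are simplified assumptions.

```java
import java.util.Arrays;

// Sketch of the "lazy comparison" idea: checkColumn always includes the cell,
// and the qualifier/version bookkeeping is deferred to checkVersions. This is
// a simplified stand-in, NOT the HBase ScanWildcardColumnTracker implementation.
public class LazyColumnTrackerSketch {
    enum MatchCode { INCLUDE, SEEK_NEXT_COL }

    private byte[] currentColumn; // qualifier currently being tracked
    private int versionsSeen;
    private final int maxVersions;

    public LazyColumnTrackerSketch(int maxVersions) {
        this.maxVersions = maxVersions;
    }

    // Lazy step: no qualifier comparison here at all.
    public MatchCode checkColumn(byte[] qualifier) {
        return MatchCode.INCLUDE;
    }

    // Deferred bookkeeping: detect column changes and enforce max versions.
    public MatchCode checkVersions(byte[] qualifier) {
        if (currentColumn == null || !Arrays.equals(currentColumn, qualifier)) {
            currentColumn = qualifier.clone();
            versionsSeen = 0;
        }
        versionsSeen++;
        return versionsSeen <= maxVersions ? MatchCode.INCLUDE
                                           : MatchCode.SEEK_NEXT_COL;
    }
}
```

In this sketch the qualifier comparison runs only once per cell, in checkVersions, instead of once in each of the two methods.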

 

[jira] [Created] (HBASE-9427) Copy constructor of ImmutableBytesWritable needs to consider the offset

2013-09-03 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-9427:


 Summary: Copy constructor of ImmutableBytesWritable needs to 
consider the offset
 Key: HBASE-9427
 URL: https://issues.apache.org/jira/browse/HBASE-9427
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala


A simple test below
{code}
byte[] bytes = {'a','b','c','d','e','f'};
ImmutableBytesWritable writable1 = new ImmutableBytesWritable(bytes, 1, bytes.length);
ImmutableBytesWritable writable2 = new ImmutableBytesWritable(writable1);
Assert.assertTrue("Mismatch", writable1.equals(writable2));
{code}
would fail with AssertionFailedError.

The reason for this is 

{code}
  public ImmutableBytesWritable(final ImmutableBytesWritable ibw) {
this(ibw.get(), 0, ibw.getSize());
  }
{code}

the constructor always assumes an offset of 0, while it could get it from the 
ibw.getOffset() method.
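The bug can be demonstrated without HBase at all. The stand-in class below (hypothetical, NOT the real ImmutableBytesWritable) mirrors the copy constructor above and shows that a view starting at offset 1 is not preserved by a copy that hard-codes offset 0.

```java
import java.util.Arrays;

// Minimal stand-in for an offset/length byte view, mirroring the buggy copy
// constructor this(ibw.get(), 0, ibw.getSize()). Not the HBase class.
public class OffsetBugDemo {
    static class BytesView {
        final byte[] bytes; final int offset; final int length;
        BytesView(byte[] bytes, int offset, int length) {
            this.bytes = bytes; this.offset = offset; this.length = length;
        }
        // Buggy copy constructor: keeps the backing array but drops the offset.
        BytesView(BytesView other) { this(other.bytes, 0, other.length); }
        byte[] copy() { return Arrays.copyOfRange(bytes, offset, offset + length); }
        boolean sameView(BytesView o) { return Arrays.equals(copy(), o.copy()); }
    }

    // Returns true only if the buggy copy still views the same byte range.
    static boolean buggyCopyPreservesView() {
        byte[] bytes = {'a', 'b', 'c', 'd', 'e', 'f'};
        BytesView v1 = new BytesView(bytes, 1, 3); // views "bcd"
        BytesView v2 = new BytesView(v1);          // buggy copy views "abc"
        return v1.sameView(v2);
    }
}
```

Using other.offset instead of 0 in the copy constructor makes the two views equal again, which is exactly the fix proposed in HBASE-8781.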

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9427) Copy constructor of ImmutableBytesWritable needs to consider the offset

2013-09-03 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757290#comment-13757290
 ] 

Vasu Mariyala commented on HBASE-9427:
--

This issue is a duplicate of HBASE-8781 and I was looking at the 0.94 code base. 
This can be closed as a duplicate.



[jira] [Commented] (HBASE-8781) ImmutableBytesWritable constructor with another IBW as param need to consider the offset of the passed IBW

2013-09-03 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13757291#comment-13757291
 ] 

Vasu Mariyala commented on HBASE-8781:
--

[~lhofhansl] Should this be fixed in 0.94?

 ImmutableBytesWritable constructor with another IBW as param need to consider 
 the offset of the passed IBW
 --

 Key: HBASE-8781
 URL: https://issues.apache.org/jira/browse/HBASE-8781
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.8
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Fix For: 0.98.0

 Attachments: HBASE-8781.patch


 {code}
 /**
  * Set the new ImmutableBytesWritable to the contents of the passed
  * <code>ibw</code>.
  * @param ibw the value to set this ImmutableBytesWritable to.
  */
   public ImmutableBytesWritable(final ImmutableBytesWritable ibw) {
     this(ibw.get(), 0, ibw.getSize());
   }
 {code}
 It should be this(ibw.get(), ibw.getOffset(), ibw.getSize());



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930-rev4.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 8930-0.94.txt, 
 HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, 
 HBASE-8930-rev3.patch, HBASE-8930-rev4.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-03 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev4.patch

Reattaching the patch to let the Hadoop QA run.


[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-26 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13750289#comment-13750289
 ] 

Vasu Mariyala commented on HBASE-7709:
--

The patch HBASE-7709-rev5.patch is on top of 0.94, and hence Hadoop QA would 
always fail while applying it on trunk. Can anyone please run the Hadoop QA 
build for the patch 0.95-trunk-rev4.patch (which is the trunk and 0.95 patch)?

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch


  We just discovered the following scenario:
 # Clusters A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will bounce between A and B. Forever!
 The reason is that when an edit comes in from C, the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster ID per edit, maintain an (unordered) set of 
 cluster IDs that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has already seen. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (even maybe even just). Store a hop-count in the 
 WAL, and the ReplicationSource increases that hop-count on each hop. If we're 
 over the max, just drop the edit.
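The second option above (a set of cluster IDs per edit) can be sketched as a toy simulation. The names (Edit, ship) are hypothetical and this is not the HBase ReplicationSource API; it only illustrates how the seen-set breaks the A/B bounce for edits originating at C.

```java
import java.util.HashSet;
import java.util.Set;

// Toy simulation of the cluster-ID-set option: each edit carries the set of
// cluster IDs that have already seen it, and a source drops any edit its sink
// has seen. Hypothetical names; not the HBase ReplicationSource API.
public class ReplicationCycleSketch {
    static class Edit {
        final Set<String> seenBy = new HashSet<>();
        Edit(String originClusterId) { seenBy.add(originClusterId); }
    }

    // Returns true if the edit was shipped to the sink, false if it was
    // dropped because the sink has already seen it (cycle broken).
    static boolean ship(Edit edit, String sinkClusterId) {
        if (edit.seenBy.contains(sinkClusterId)) {
            return false;
        }
        edit.seenBy.add(sinkClusterId);
        return true;
    }
}
```

Tracing the scenario above: an edit born on C ships C to A, then A to B, but the next hop B back to A is dropped because A is already in the edit's seen-set.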



[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-26 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: (was: 0.95-trunk-rev4.patch)



[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-26 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: 0.95-trunk-rev4.patch

Attaching the patch again



[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-26 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13750617#comment-13750617
 ] 

Vasu Mariyala commented on HBASE-7709:
--

The release audit warnings are not related to the patch. They are caused by 
license headers missing from the files below; after correcting the license 
info in these files, the release audit succeeds.

{code}
***

Unapproved licenses:

  
/home/vmariyala/bigdata-dev/testhbase/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css
  
/home/vmariyala/bigdata-dev/testhbase/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css

***
{code}

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch





[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-25 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: 0.95-trunk-rev4.patch

Re-attaching patch version 4 so that Hadoop QA can run.

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch





[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-25 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: (was: 0.95-trunk-rev4.patch)

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch





[jira] [Commented] (HBASE-9331) FuzzyRow filter getNextForFuzzyRule not working properly for special case

2013-08-25 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13749833#comment-13749833
 ] 

Vasu Mariyala commented on HBASE-9331:
--

[~yuzhih...@gmail.com] Does FuzzyRowFilter make the assumptions below?

a) All the row keys are of the same length, because the next row for a valid 
fuzzy match (key={0,1,1,1}) can always be a zero-appended byte array 
(key={0,1,1,1,0}). If this assumption always holds, this jira would no longer 
be an issue.

b) The length of the row key must always be greater than or equal to the 
length of the fuzzy row key or fuzzy info.

{code}
Assert.assertEquals(FuzzyRowFilter.SatisfiesCode.YES,
    FuzzyRowFilter.satisfies(new byte[]{1, 0, 1},         // row to check, only 3 bytes
                             new byte[]{1, 0, 1, 0, 1},   // fuzzy row contains 5 bytes
                             new byte[]{0, 1, 0, 0, 0})); // mask contains 5 bytes
{code}

The intention is to extract the rows matching the pattern 1?101, but the 
3-byte row 101 satisfies the pattern and is included in the results.

c) Does the fuzzy row key need to have '0's even in the places where the 
fuzzy info contains a 1 (indicating a non-fixed byte)?

{code}
assertNext(
    new byte[]{1, 1, 1, 1},  // fuzzy row
    new byte[]{0, 0, 1, 1},  // fuzzy mask (info)
    new byte[]{0, 1, 3, 2},  // current row
    new byte[]{1, 1, 1, 1}); // next row (expected). FAILURE: there is a
                             // potential match at {1,1,0,0} that has been
                             // ignored by the FuzzyRowFilter.
{code}
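
For reference, here is a minimal, self-contained sketch of the fixed/non-fixed byte semantics under discussion (hypothetical code, not the actual FuzzyRowFilter); note that it rejects rows shorter than the fuzzy key, which is exactly the length check that case (b) above suggests is missing:

```java
// Hypothetical illustration of fuzzy-key matching semantics:
// fuzzyInfo[i] == 0 means position i is fixed (row byte must equal the
// fuzzy key byte); fuzzyInfo[i] == 1 means position i is non-fixed.
public class FuzzyMatchSketch {
    public static boolean satisfies(byte[] row, byte[] fuzzyKey, byte[] fuzzyInfo) {
        if (row.length < fuzzyKey.length) {
            return false; // enforce assumption (b): short rows never match
        }
        for (int i = 0; i < fuzzyKey.length; i++) {
            if (fuzzyInfo[i] == 0 && row[i] != fuzzyKey[i]) {
                return false; // fixed byte mismatch
            }
        }
        return true;
    }
}
```

Under this sketch the 3-byte row {1,0,1} from case (b) would be rejected rather than reported as a match for the 5-byte pattern 1?101.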

 FuzzyRow filter getNextForFuzzyRule not working properly for special case
 -

 Key: HBASE-9331
 URL: https://issues.apache.org/jira/browse/HBASE-9331
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.11
 Environment: Issue is not dependent upon environment.
Reporter: Tony Dean
 Attachments: TestFuzzyRowFilter.patch


 The case that getNextForFuzzyRule() fails is when the result (fuzzy key) is 
 extended in length (zero padded) to match the current row for comparisons.  
 If the hint is returned with zero padded bytes, the next seek may skip over a 
 valid match.  See the example below.
 /**
  * The code below circumvents the following situation.
  * 
  * fuzzy.key  = visitor,session,Logon
  * fuzzy.mask = 11100
  * 
  * example hbase row data:
  * visitor,session,AddToCart
  *   FacebookLike
  *   Logon
  *   MobileSpecial
  *...
  *
  * 
  * For row visitor,session,AddToCart, the current code would
  * return a hint of visitor,session,Logon\0\0\0\0 (zero padded).
  * The next seek would skip visitor,session,Logon and jump
  * to visitor,session,MobileSpecial.
  */
 
 // trim trailing zeros that were not part of the original fuzzy key
 int i = result.length;
 for (; i > fuzzyKeyBytes.length; i--)
 {
   if (result[i-1] != 0x00)
     break;
 }
 if (i != result.length)
 {
   result = Arrays.copyOf(result, i);
 }
 The code above, added to the end of getNextForFuzzyRule(), will circumvent 
 the issue.  I tested my scenario and it produces the correct results.  There 
 may be a better solution.
 Thanks.
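
The trimming loop from the description can be exercised in isolation (hypothetical class and values, assuming a zero-padded seek hint):

```java
import java.util.Arrays;

// Hypothetical demo of the proposed fix: drop trailing zero bytes from the
// seek hint that were only added as padding beyond the original fuzzy key,
// so the next seek does not jump past a valid match.
public class TrimHintDemo {
    public static byte[] trim(byte[] result, byte[] fuzzyKeyBytes) {
        int i = result.length;
        for (; i > fuzzyKeyBytes.length; i--) {
            if (result[i - 1] != 0x00) {
                break; // stop at the last non-zero byte
            }
        }
        return (i != result.length) ? Arrays.copyOf(result, i) : result;
    }

    public static void main(String[] args) {
        byte[] trimmed = trim(new byte[]{1, 2, 3, 0, 0}, new byte[]{1, 2, 3});
        System.out.println(Arrays.toString(trimmed)); // prints [1, 2, 3]
    }
}
```

Zero bytes that were part of the original fuzzy key (within its length) are deliberately left in place; only the padding appended past the key's length is removed.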



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-23 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13748349#comment-13748349
 ] 

Vasu Mariyala commented on HBASE-8930:
--

The test failure in build 6853 is unrelated to the patch; it failed due to an 
operation timeout. Build 6852 was successful for the same patch 
(HBASE-8930-rev2.patch).

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 1- Fill a row with some columns
 2- Get the row with fewer columns than the universe - use a filter to print kvs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints the KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
     Configuration config = HBaseConfiguration.create();
     config.set("hbase.zookeeper.quorum", "myZK");
     HTable hTable = new HTable(config, "testTable");
     byte[] cf = Bytes.toBytes("cf");
     byte[] row = Bytes.toBytes("row");
     byte[] col1  = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
     byte[] col2  = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
     byte[] col3  = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
     byte[] col4  = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
     byte[] col5  = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
     byte[] col6  = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
     byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
     byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
     byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
     byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
     byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
     byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
     byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
     byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
     // === INSERTION ===
     Put put = new Put(row);
     put.add(cf, col1, Bytes.toBytes((short) 1));
     put.add(cf, col2, Bytes.toBytes((short) 1));
     put.add(cf, col3, Bytes.toBytes((short) 3));
     put.add(cf, col4, Bytes.toBytes((short) 3));
     put.add(cf, col5, Bytes.toBytes((short) 3));
     put.add(cf, col6, Bytes.toBytes((short) 3));
     hTable.put(put);
     put = new Put(row);
     put.add(cf, col1v, Bytes.toBytes((short) 10));
     put.add(cf, col2v, Bytes.toBytes((short) 10));
     put.add(cf, col3v, Bytes.toBytes((short) 10));
     put.add(cf, col4v, Bytes.toBytes((short) 10));
     put.add(cf, col5v, Bytes.toBytes((short) 10));
     put.add(cf, col6v, Bytes.toBytes((short) 10));
     hTable.put(put);
     hTable.flushCommits();
     // === READING ===
     Filter allwaysNextColFilter = new AllwaysNextColFilter();
     Get get = new Get(row);
     get.addColumn(cf, col1);  //5581
     get.addColumn(cf, col1v); //5584
     get.addColumn(cf, col1g); //5586
     get.addColumn(cf, col2);  //5591
     get.addColumn(cf, col2v); //5594
     get.addColumn(cf, col2g); //5596

     get.setFilter(allwaysNextColFilter);
     get.setMaxVersions(1);
     System.out.println(get);

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-23 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-7709-rev4.patch
0.95-trunk-rev3.patch

Thanks [~yuzhih...@gmail.com] for the review. Yes, I feel clusterIds may be 
better than just clusters. Attached the patches HBASE-7709-rev4.patch (0.94) 
and 0.95-trunk-rev3.patch (0.95 and trunk) which contain the method name changes.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: 0.95-trunk-rev3.patch, HBASE-7709-rev4.patch, 
 HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-23 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: 0.95-trunk-rev3.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-23 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-7709-rev4.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-23 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13748855#comment-13748855
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Ignore the last comment as it was supposed to be in a different jira. Sorry 
about that.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 
 

[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-23 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: 0.95-trunk-rev3.patch
HBASE-7709-rev4.patch

Thanks [~yuzhih...@gmail.com] for the review. Yes, I feel clusterIds may be 
better than just clusters. Attached the patches HBASE-7709-rev4.patch (0.94) 
and 0.95-trunk-rev3.patch (0.95 and trunk), which contain the method name changes.

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, HBASE-7709.patch, 
 HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, 
 HBASE-7709-rev4.patch


  We just discovered the following scenario:
 # Cluster A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Optionally only support master/master (not cycles of more than two 
 clusters). In that case we can always reset the cluster ID in the 
 ReplicationSource. That means that cycles > 2 will have the data cycle 
 forever. This is the only option that requires no changes in the HLog format.
 # Instead of a single cluster ID per edit, maintain an (unordered) set of 
 cluster IDs that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has seen already. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (or maybe even lower). Store a hop count in the 
 WAL and have the ReplicationSource increase that hop count on each hop. If 
 we're over the max, just drop the edit.
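The second option above can be sketched roughly as follows. This is a minimal, hypothetical model, not HBase's actual classes: the names WalEdit, shouldReplicate, and markSeen are illustrative, assuming each edit carries the set of cluster UUIDs that have already applied it.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch: each WAL edit carries the set of cluster IDs that have
// seen it; a replication source drops edits its sink has already applied.
public class ClusterLoopSketch {
    static class WalEdit {
        final Set<UUID> seenClusters = new HashSet<>();
    }

    // True if the edit should still be shipped to the given sink cluster.
    static boolean shouldReplicate(WalEdit edit, UUID sinkClusterId) {
        return !edit.seenClusters.contains(sinkClusterId);
    }

    // Record that a cluster has applied this edit.
    static void markSeen(WalEdit edit, UUID clusterId) {
        edit.seenClusters.add(clusterId);
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID(), b = UUID.randomUUID(), c = UUID.randomUUID();
        WalEdit edit = new WalEdit();
        markSeen(edit, c);                  // edit originates on C
        if (shouldReplicate(edit, a)) {     // C -> A ships
            markSeen(edit, a);
        }
        if (shouldReplicate(edit, b)) {     // A -> B ships
            markSeen(edit, b);
        }
        // B -> A: A is already in the set, so the edit is dropped -- no loop.
        System.out.println("ship back to A? " + shouldReplicate(edit, a));
    }
}
```

With this check, an edit can traverse an arbitrary cycle of clusters at most once, at the cost of storing one UUID per hop in the WAL entry.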

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-23 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13749081#comment-13749081
 ] 

Vasu Mariyala commented on HBASE-7709:
--

[~jeffreyz] This cluster information is only stored as part of the HLog, and the 
HLog gets rolled. So do you think that is the right place to read the 
originating-cluster information from in order to build such metrics?

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, HBASE-7709.patch, 
 HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, 
 HBASE-7709-rev4.patch




[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-23 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: HBASE-7709-rev5.patch
0.95-trunk-rev4.patch

Attaching the patch 0.95-trunk-rev4.patch (0.95 and trunk), which stores the 
clusters as a list rather than a set. The first cluster ID in the list is the 
originating cluster, and the subsequent entries indicate the replication path.

The patch HBASE-7709-rev5.patch (0.94) has the changes to ensure the API of 
0.94 is the same as the API of 0.95 and trunk.

These patches primarily address the monitoring issues mentioned by [~jeffreyz].
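The list-based scheme can be sketched as below. This is an illustrative model, not the patch's actual API, assuming the first entry in the ordered list is the originating cluster and later entries record the hops:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical sketch: cluster IDs kept as an ordered list, so index 0 is the
// originating cluster and the remainder records the replication path.
public class ReplicationPathSketch {
    // The first entry is the cluster the edit originated on.
    static UUID origin(List<UUID> clusterIds) {
        return clusterIds.isEmpty() ? null : clusterIds.get(0);
    }

    // Same loop check as a set: never ship to a cluster that has seen the edit.
    static boolean shouldShip(List<UUID> clusterIds, UUID sink) {
        return !clusterIds.contains(sink);
    }

    // Append a hop, preserving order so the origin stays first.
    static void appendHop(List<UUID> clusterIds, UUID cluster) {
        if (!clusterIds.contains(cluster)) {
            clusterIds.add(cluster);
        }
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID(), b = UUID.randomUUID(), c = UUID.randomUUID();
        List<UUID> path = new ArrayList<>();
        appendHop(path, c);                  // originates on C
        appendHop(path, a);                  // replicated C -> A
        appendHop(path, b);                  // replicated A -> B
        System.out.println("origin is C: " + origin(path).equals(c));
        System.out.println("ship B -> A: " + shouldShip(path, a));
    }
}
```

Compared with an unordered set, the list costs nothing extra but makes the origin and the path recoverable for per-cluster replication metrics.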

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch




[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-22 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev2.patch

Thanks [~yuzhih...@gmail.com] for the review. Attached the patch 
HBASE-8930-rev2.patch which addresses the comments made by you.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 1- Fill a row with some columns
 2- Get the row, requesting fewer columns than were persisted, and use a filter to print KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier.
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P = Persisted
 R = Requested
 E = Evaluated
 X = Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "myZK");
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 //==READING=//
 Filter allwaysNextColFilter = new AllwaysNextColFilter();
 Get get = new Get(row);
 get.addColumn(cf, col1); //5581
 get.addColumn(cf, col1v); //5584
 get.addColumn(cf, col1g); //5586
 get.addColumn(cf, col2); //5591
 get.addColumn(cf, col2v); //5594
 get.addColumn(cf, 
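The evaluation-order problem the table above illustrates can be modeled minimally as follows. All names here are hypothetical and qualifiers are reduced to plain integers; the point is that when the filter runs before the requested-column check, it evaluates persisted qualifiers (such as 5601) that the Get never asked for.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical model of scan evaluation order, not HBase's actual internals.
public class FilterOrderSketch {
    // Filter-first order: every persisted column reaches the filter.
    static List<Integer> evaluatedFilterFirst(List<Integer> persisted,
                                              Set<Integer> requested) {
        return new ArrayList<>(persisted);
    }

    // Column-check-first order: only requested columns reach the filter.
    static List<Integer> evaluatedColumnsFirst(List<Integer> persisted,
                                               Set<Integer> requested) {
        List<Integer> out = new ArrayList<>();
        for (Integer q : persisted) {
            if (requested.contains(q)) {
                out.add(q);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Persisted and requested qualifiers from the table in the report.
        List<Integer> persisted = List.of(5581, 5584, 5591, 5594, 5601, 5604);
        Set<Integer> requested =
            new TreeSet<>(List.of(5581, 5584, 5586, 5591, 5594, 5596));
        // Filter-first evaluates 5601 even though it was never requested;
        // checking columns first keeps the filter inside the requested set.
        System.out.println(evaluatedFilterFirst(persisted, requested));
        System.out.println(evaluatedColumnsFirst(persisted, requested));
    }
}
```

This mirrors the table: under filter-first ordering 5601 appears in the E row, while the X row (returned) only ever contains requested columns.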

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-22 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930-rev2.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-22 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev2.patch

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch


 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-22 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13748278#comment-13748278
 ] 

Vasu Mariyala commented on HBASE-8930:
--

[~yuzhih...@gmail.com] Can you please start a PreCommit-HBase build for the 
patch HBASE-8930-rev2.patch?

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch, 
 HBASE-8930-rev2.patch



[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930.patch)

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch


 

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930.patch

Reattaching the same patch to trigger a Hadoop QA run.

 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Attachments: HBASE-8930.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
 | 5606 |... 
 |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
 |  |...
 |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
 |  |...
 |  |  E   |   E  |  |  |  E   |   E  |  |  |  
 {color:red}E{color}   |  |  |...
 |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
 |  |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set(hbase.zookeeper.quorum, myZK);
 HTable hTable = new HTable(config, testTable);
 byte[] cf = Bytes.toBytes(cf);
 byte[] row = Bytes.toBytes(row);
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // ===== INSERTION ===== //
 Put put = new Put(row);
 put.add(cf, col1, Bytes.toBytes((short) 1));
 put.add(cf, col2, Bytes.toBytes((short) 1));
 put.add(cf, col3, Bytes.toBytes((short) 3));
 put.add(cf, col4, Bytes.toBytes((short) 3));
 put.add(cf, col5, Bytes.toBytes((short) 3));
 put.add(cf, col6, Bytes.toBytes((short) 3));
 hTable.put(put);
 put = new Put(row);
 put.add(cf, col1v, Bytes.toBytes((short) 10));
 put.add(cf, col2v, Bytes.toBytes((short) 10));
 put.add(cf, col3v, Bytes.toBytes((short) 10));
 put.add(cf, col4v, Bytes.toBytes((short) 10));
 put.add(cf, col5v, Bytes.toBytes((short) 10));
 put.add(cf, col6v, Bytes.toBytes((short) 10));
 hTable.put(put);
 hTable.flushCommits();
 // ===== READING ===== //
 Filter allwaysNextColFilter = new AllwaysNextColFilter();
 Get get = new Get(row);
 get.addColumn(cf, col1); //5581
 get.addColumn(cf, col1v); //5584
 get.addColumn(cf, col1g); //5586
 get.addColumn(cf, col2); //5591
 get.addColumn(cf, col2v); //5594
 get.addColumn(cf, col2g); //5596
 
 get.setFilter(allwaysNextColFilter);
 get.setMaxVersions(1);
 

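The quoted test above is truncated and depends on the reporter's own helper classes, but the reported behavior can be illustrated with a self-contained sketch. This is a plain-Java simulation (no HBase classes; all names here are hypothetical stand-ins), modeling how a filter returning INCLUDE_AND_NEXT_COL sees every persisted KV before the requested-column check runs, which matches the extra red E in the table above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FilterEvaluationOrderDemo {
    // Qualifiers the simulated filter was asked to evaluate.
    static List<String> evaluated = new ArrayList<>();

    // Simplified scan loop: the filter runs on each persisted KV first,
    // and only afterwards is the requested-column restriction applied.
    static List<String> scan(List<String> persisted, Set<String> requested) {
        List<String> returned = new ArrayList<>();
        for (String qualifier : persisted) {
            evaluated.add(qualifier); // filterKeyValue() -> INCLUDE_AND_NEXT_COL
            if (requested.contains(qualifier)) {
                returned.add(qualifier); // only requested columns reach the client
            }
        }
        return returned;
    }

    public static void main(String[] args) {
        List<String> persisted = Arrays.asList("5581", "5584", "5591", "5594", "5601");
        Set<String> requested =
            new HashSet<>(Arrays.asList("5581", "5584", "5586", "5591", "5594", "5596"));
        List<String> returned = scan(persisted, requested);
        // 5601 was never requested, yet the filter evaluated it.
        System.out.println("evaluated = " + evaluated);
        System.out.println("returned  = " + returned);
    }
}
```

The point of the sketch is the ordering: if the column tracker ran before the filter, the filter would never see qualifier 5601 at all.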
[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: HBASE-7709-rev3.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch


  We just discovered the following scenario:
 # Clusters A and B are set up in master/master replication
 # By accident we had Cluster C replicate to Cluster A.
 Now all edits originating from C will be bouncing between A and B. Forever!
 The reason is that when the edits come in from C the cluster ID is already set 
 and won't be reset.
 We have a couple of options here:
 # Only support master/master (not cycles of more than two clusters). In that 
 case we can always reset the cluster ID in the ReplicationSource. That means 
 that cycles > 2 will have the data cycle forever. This is the only option that 
 requires no changes to the HLog format.
 # Instead of a single cluster id per edit, maintain an (unordered) set of 
 cluster ids that have seen this edit. Then in ReplicationSource we drop any 
 edit that the sink has already seen. This is the cleanest approach, but it 
 might need a lot of data stored per edit if there are many clusters involved.
 # Maintain a configurable counter of the maximum cycle size we want to 
 support. Could default to 10 (or maybe even lower). Store a hop-count in the 
 WAL and have the ReplicationSource increase that hop-count on each hop. If 
 we're over the max, just drop the edit.
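As a rough plain-Java sketch of the second option (the class and method names here are hypothetical, not actual HBase APIs): each edit carries the set of cluster ids that have already seen it, and the source drops the edit once the sink's id is in that set, which breaks the C -> A -> B -> A loop on the first repeat hop:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class ClusterIdDedupDemo {
    /** A WAL edit carrying the set of cluster ids that have seen it. */
    static class Edit {
        final Set<UUID> consumedClusterIds = new HashSet<>();
    }

    /** Record the current hop, then ship only if the sink has not seen the edit. */
    static boolean shouldReplicate(Edit edit, UUID sourceClusterId, UUID sinkClusterId) {
        edit.consumedClusterIds.add(sourceClusterId);
        return !edit.consumedClusterIds.contains(sinkClusterId);
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID(), b = UUID.randomUUID(), c = UUID.randomUUID();
        Edit edit = new Edit();
        edit.consumedClusterIds.add(c);                  // edit originated on cluster C
        System.out.println(shouldReplicate(edit, c, a)); // C -> A: A hasn't seen it
        System.out.println(shouldReplicate(edit, a, b)); // A -> B: B hasn't seen it
        System.out.println(shouldReplicate(edit, b, a)); // B -> A: dropped, loop broken
    }
}
```

The hop-count option from the third bullet would replace the set with a single integer, trading per-edit storage for a looser bound on cycle size.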

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: (was: HBASE-7709-rev3.patch)

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch





[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: HBASE-7709-rev3.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, 
 HBASE-7709-rev3.patch





[jira] [Updated] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-7709:
-

Attachment: 0.95-trunk-rev2.patch

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, 
 HBASE-7709-rev2.patch, HBASE-7709-rev3.patch





[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746781#comment-13746781
 ] 

Vasu Mariyala commented on HBASE-7709:
--

Attached the patches for 0.94 (HBASE-7709-rev3.patch) and for 0.95/trunk 
(0.95-trunk-rev2.patch), which address the nits mentioned by Lars

0.94

   a) Changed PREFIX_CLUSTER_KEY to '.' (period, as column family names can't 
start with it)

   b) PREFIX_CONSUMED_CLUSTER_IDS changed to _cs.id

   c) A comment has been added in WALEdit mentioning that this is done for 
backwards compatibility and has been removed in 0.95.2+ releases

trunk/0.95
 
  a) From the protobuf documentation:

  "repeated: this field can be repeated any number of times (including zero) 
in a well-formed message. The order of the repeated values will be preserved."
  "optional: a well-formed message can have zero or one of this field (but 
not more than one)."

  So does repeated imply it is optional? Also, from WALProtos.java, the 
clusters list is initialized to an empty list in the initFields() method, so we 
would not get a NullPointerException. Maybe I will do more research on 
this.

  b) clusters in Import has been changed to use singleton

  c) addClusters has the signature public Builder 
addClusters(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.UUID value), 
which takes a UUID as the parameter.

  d) Yes, this is used only to read the older log entries when migrating from 
0.94 to 0.95.2.
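The point in (a), that an unset repeated field reads as an empty list rather than null, can be modeled in plain Java. This is a hypothetical stand-in for generated protobuf code (WalKey, getClustersList and addCluster here only mimic the generated pattern, they are not the real WALProtos API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RepeatedFieldDemo {
    /** Minimal stand-in for a generated message with a repeated 'clusters' field. */
    static class WalKey {
        // initFields()-style default: absent repeated field == empty list, never null.
        private List<String> clusters = Collections.emptyList();

        List<String> getClustersList() { return clusters; }

        void addCluster(String id) {
            // Generated builders lazily replace the immutable default on first add.
            if (clusters.isEmpty()) clusters = new ArrayList<>();
            clusters.add(id);
        }
    }

    public static void main(String[] args) {
        WalKey key = new WalKey();
        // Zero occurrences is well-formed, so "repeated" subsumes "optional"
        // and readers can iterate without a null check.
        System.out.println(key.getClustersList().isEmpty());
        key.addCluster("cluster-1");
        System.out.println(key.getClustersList().size());
    }
}
```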


 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, 
 HBASE-7709-rev2.patch, HBASE-7709-rev3.patch





[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13746838#comment-13746838
 ] 

Vasu Mariyala commented on HBASE-7709:
--

[~v.himanshu] There is no re-ordering done by the patch. The custom_entry_type 
entry is, and was, commented out. I changed its number to 9 just in case 
someone un-comments it in the future. Please let me know if I missed anything.

 Infinite loop possible in Master/Master replication
 ---

 Key: HBASE-7709
 URL: https://issues.apache.org/jira/browse/HBASE-7709
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.94.6, 0.95.1
Reporter: Lars Hofhansl
Assignee: Vasu Mariyala
 Fix For: 0.98.0, 0.94.12, 0.96.0

 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 
 0.95-trunk-rev2.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, 
 HBASE-7709-rev2.patch, HBASE-7709-rev3.patch





  1   2   >