[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-12 Thread ohadshacham
Github user ohadshacham commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r167501024
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -850,19 +849,12 @@ private void addCoprocessors(byte[] tableName, HTableDescriptor descriptor, PTab
                 && !SchemaUtil.isMetaTable(tableName)
                 && !SchemaUtil.isStatsTable(tableName)) {
             if (isTransactional) {
-                if (!descriptor.hasCoprocessor(PhoenixTransactionalIndexer.class.getName())) {
-                    descriptor.addCoprocessor(PhoenixTransactionalIndexer.class.getName(), null, priority, null);
-                }
--- End diff --

Let's skip this for now and add a release note. 


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166987506
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.execute;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import com.google.common.primitives.Longs;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.*;
+import org.apache.hadoop.hbase.client.*;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.MultiMutation;
+import org.apache.phoenix.hbase.index.ValueGetter;
+import org.apache.phoenix.hbase.index.covered.IndexMetaData;
+import org.apache.phoenix.hbase.index.covered.IndexUpdate;
+import org.apache.phoenix.hbase.index.covered.TableState;
+import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
+import org.apache.phoenix.hbase.index.covered.update.ColumnTracker;
+import org.apache.phoenix.hbase.index.covered.update.IndexedColumnGroup;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.transaction.PhoenixTransactionContext;
+import org.apache.phoenix.transaction.PhoenixTransactionContext.PhoenixVisibilityLevel;
+import org.apache.phoenix.util.ScanUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.*;
+
+
+public class PhoenixTxnIndexMutationGenerator {
--- End diff --

Might be prudent to implement getAttributes() as being the attributes on 
the first mutation in the batch. I'm not sure if we depend on these, but it 
could potentially get used by PhoenixIndexCodec down the road.
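
A minimal stand-alone sketch of that idea (the class and attribute names below are illustrative stand-ins, not the actual Phoenix/HBase types):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for an HBase Mutation carrying attributes.
class FakeMutation {
    private final Map<String, byte[]> attributes = new LinkedHashMap<>();
    void setAttribute(String name, byte[] value) { attributes.put(name, value); }
    Map<String, byte[]> getAttributesMap() { return attributes; }
}

// Hypothetical stand-in for the batch wrapper (MultiMutation in Phoenix).
class FakeBatch {
    private final List<FakeMutation> mutations;
    FakeBatch(List<FakeMutation> mutations) { this.mutations = mutations; }

    // The suggestion: report the attributes of the first mutation in the
    // batch, so downstream code (e.g. an index codec) can read them.
    Map<String, byte[]> getAttributes() {
        return mutations.isEmpty()
                ? Map.<String, byte[]>of()
                : mutations.get(0).getAttributesMap();
    }
}
```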


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166986822
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java ---
@@ -1038,32 +1038,21 @@ public void setValue(PColumn column, byte[] byteValue) {
         @Override
         public void delete() {
             newMutations();
-            // we're using the Tephra column family delete marker here to prevent the translation
-            // of deletes to puts by the Tephra's TransactionProcessor
-            if (PTableImpl.this.isTransactional()) {
-                Put put = new Put(key);
-                if (families.isEmpty()) {
-                    put.add(SchemaUtil.getEmptyColumnFamily(PTableImpl.this), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                        HConstants.EMPTY_BYTE_ARRAY);
-                } else {
-                    for (PColumnFamily colFamily : families) {
-                        put.add(colFamily.getName().getBytes(), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                            HConstants.EMPTY_BYTE_ARRAY);
-                    }
-                }
-                deleteRow = put;
--- End diff --

The pendingUpdates represent what is about to be written (i.e. the current 
batch of mutations). I think we need to leave it, though, if we want an old 
client to work with a new server. It doesn't hurt anything as far as I can see.


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166985377
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -850,19 +849,12 @@ private void addCoprocessors(byte[] tableName, HTableDescriptor descriptor, PTab
                 && !SchemaUtil.isMetaTable(tableName)
                 && !SchemaUtil.isStatsTable(tableName)) {
             if (isTransactional) {
-                if (!descriptor.hasCoprocessor(PhoenixTransactionalIndexer.class.getName())) {
-                    descriptor.addCoprocessor(PhoenixTransactionalIndexer.class.getName(), null, priority, null);
-                }
--- End diff --

The coprocessor is already installed on existing tables. Removing the 
addCoprocessor only impacts a new table being created. Eventually, we could 
have some code that runs at upgrade time which removes the coprocessor from 
existing tables, but we could only do this after we know all clients have been 
upgraded.

If we want to handle the old client, new server situation, it's slightly 
more complicated (but not too bad). We have an optimization that conditionally 
performs an RPC before the batch mutation which contains all the index metadata 
information (if there are more than a threshold number of mutations being 
batched). This information is then cached on the RS and looked up by the UUID 
we store on the Put. That's to prevent having to put information on *every* 
single mutation. So we'd have to add this attribute to the ServerCache or 
conditionally add the attribute to the mutation (depending on if we're doing 
the extra RPC or not).
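
The tradeoff described above (index metadata attribute on every mutation vs. a UUID pointing at a ServerCache entry) can be sketched as follows; the attribute names and threshold plumbing are assumptions for illustration, not Phoenix constants:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: below a threshold, the index metadata rides on each
// mutation; above it, a single RPC stores the metadata in a server-side
// cache, and each mutation only carries a UUID used to look it up.
class IndexMetaDataRouting {
    static final String MD_ATTRIB = "_IndexMetaData"; // per-mutation metadata
    static final String UUID_ATTRIB = "_IndexUUID";   // ServerCache lookup key

    static void attach(Map<String, byte[]> mutationAttribs,
                       byte[] indexMetaData, byte[] cacheUuid,
                       int batchSize, int threshold) {
        if (batchSize > threshold) {
            // Big batch: metadata was shipped once via the ServerCache RPC;
            // the mutation only needs the UUID to find it on the RS.
            mutationAttribs.put(UUID_ATTRIB, cacheUuid);
        } else {
            // Small batch: skip the extra RPC and put the metadata
            // directly on the mutation.
            mutationAttribs.put(MD_ATTRIB, indexMetaData);
        }
    }
}
```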


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread ohadshacham
Github user ohadshacham commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166908718
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/* ... Apache license header and imports elided; identical to the earlier quote of this file ... */
+
+
+public class PhoenixTxnIndexMutationGenerator {
+
+    private static final Log LOG = LogFactory.getLog(PhoenixTxnIndexMutationGenerator.class);
+
+    private final PhoenixConnection connection;
+    private final PhoenixTransactionContext phoenixTransactionContext;
+
+    PhoenixTxnIndexMutationGenerator(PhoenixConnection connection, PhoenixTransactionContext phoenixTransactionContext) {
+        this.phoenixTransactionContext = phoenixTransactionContext;
+        this.connection = connection;
+    }
+
+    private static void addMutation(Map<ImmutableBytesPtr, MultiMutation> mutations, ImmutableBytesPtr row, Mutation m) {
+        MultiMutation stored = mutations.get(row);
+        // we haven't seen this row before, so add it
+        if (stored == null) {
+            stored = new MultiMutation(row);
+            mutations.put(row, stored);
+        }
+        stored.addAll(m);
+    }
+
+    public List<Mutation> getIndexUpdates(final PTable table, PTable index, List<Mutation> dataMutations) throws IOException, SQLException {
+
+        if (dataMutations.isEmpty()) {
+            return new ArrayList<Mutation>();
+        }
+
+        Map<String, byte[]> updateAttributes = dataMutations.get(0).getAttributesMap();
+        boolean replyWrite = (BaseScannerRegionObserver.ReplayWrite.fromBytes(updateAttributes.get(BaseScannerRegionObserver.REPLAY_WRITES)) != null);
+        byte[] txRollbackAttribute = updateAttributes.get(PhoenixTransactionContext.TX_ROLLBACK_ATTRIBUTE_KEY);
+
+        IndexMaintainer maintainer = index.getIndexMaintainer(table, connection);
+
   

[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread ohadshacham
Github user ohadshacham commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166876845
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/* ... Apache license header and imports elided; identical to the earlier quote of this file ... */
+
+
+public class PhoenixTxnIndexMutationGenerator {
--- End diff --

As you saw, I basically imported the code and removed the coprocessor-specific 
stuff, finding client-side equivalents where needed :). Essentially, I replaced 
whatever came from the coprocessor environment and refactored the code.


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread ohadshacham
Github user ohadshacham commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166875958
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -850,19 +849,12 @@ private void addCoprocessors(byte[] tableName, HTableDescriptor descriptor, PTab
                 && !SchemaUtil.isMetaTable(tableName)
                 && !SchemaUtil.isStatsTable(tableName)) {
             if (isTransactional) {
-                if (!descriptor.hasCoprocessor(PhoenixTransactionalIndexer.class.getName())) {
-                    descriptor.addCoprocessor(PhoenixTransactionalIndexer.class.getName(), null, priority, null);
-                }
--- End diff --

Maybe I am missing something, but I don't understand why, if we upgrade the 
server and client together, we still need an empty PhoenixTransactionalIndexer. 
Can't we just remove the addCoprocessor call from the code and remove 
PhoenixTransactionalIndexer?

Regarding supporting a server upgrade with old clients: can't we just add a new 
attribute to each mutation denoting that index maintenance was performed on the 
client side, and modify PhoenixTransactionalIndexer to ignore mutations (i.e., 
call the super method) for mutations that contain this attribute?
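
A stand-alone sketch of that proposal (the attribute name is made up for illustration, and the check is shown outside any real coprocessor class):

```java
import java.util.Map;

// Sketch: the client tags each mutation with an attribute saying index
// maintenance already happened client-side, and the server-side indexer
// skips mutations carrying that tag. "_ClientSideIndexMaintenance" is a
// hypothetical name, not an actual Phoenix constant.
class ClientMaintainedIndexCheck {
    static final String CLIENT_SIDE_INDEXING = "_ClientSideIndexMaintenance";

    /** True when the server-side indexer should leave this mutation alone. */
    static boolean shouldSkipServerMaintenance(Map<String, byte[]> attributes) {
        return attributes.containsKey(CLIENT_SIDE_INDEXING);
    }
}
```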


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-08 Thread ohadshacham
Github user ohadshacham commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166868712
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java ---
@@ -1038,32 +1038,21 @@ public void setValue(PColumn column, byte[] byteValue) {
         @Override
         public void delete() {
             newMutations();
-            // we're using the Tephra column family delete marker here to prevent the translation
-            // of deletes to puts by the Tephra's TransactionProcessor
-            if (PTableImpl.this.isTransactional()) {
-                Put put = new Put(key);
-                if (families.isEmpty()) {
-                    put.add(SchemaUtil.getEmptyColumnFamily(PTableImpl.this), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                        HConstants.EMPTY_BYTE_ARRAY);
-                } else {
-                    for (PColumnFamily colFamily : families) {
-                        put.add(colFamily.getName().getBytes(), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                            HConstants.EMPTY_BYTE_ARRAY);
-                    }
-                }
-                deleteRow = put;
--- End diff --

No, since instead of directly writing the family deletion marker, we perform a 
regular delete operation through the transaction processor. The transaction 
processor writes the family deletion marker, and here we just check for its 
existence.


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-07 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166816653
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/* ... Apache license header and imports elided; identical to the earlier quote of this file ... */
+
+
+public class PhoenixTxnIndexMutationGenerator {
+
+    private static final Log LOG = LogFactory.getLog(PhoenixTxnIndexMutationGenerator.class);
+
+    private final PhoenixConnection connection;
+    private final PhoenixTransactionContext phoenixTransactionContext;
+
+    PhoenixTxnIndexMutationGenerator(PhoenixConnection connection, PhoenixTransactionContext phoenixTransactionContext) {
+        this.phoenixTransactionContext = phoenixTransactionContext;
+        this.connection = connection;
+    }
+
+    private static void addMutation(Map<ImmutableBytesPtr, MultiMutation> mutations, ImmutableBytesPtr row, Mutation m) {
+        MultiMutation stored = mutations.get(row);
+        // we haven't seen this row before, so add it
+        if (stored == null) {
+            stored = new MultiMutation(row);
+            mutations.put(row, stored);
+        }
+        stored.addAll(m);
+    }
+
+    public List<Mutation> getIndexUpdates(final PTable table, PTable index, List<Mutation> dataMutations) throws IOException, SQLException {
+
+        if (dataMutations.isEmpty()) {
+            return new ArrayList<Mutation>();
+        }
+
+        Map<String, byte[]> updateAttributes = dataMutations.get(0).getAttributesMap();
+        boolean replyWrite = (BaseScannerRegionObserver.ReplayWrite.fromBytes(updateAttributes.get(BaseScannerRegionObserver.REPLAY_WRITES)) != null);
+        byte[] txRollbackAttribute = updateAttributes.get(PhoenixTransactionContext.TX_ROLLBACK_ATTRIBUTE_KEY);
+
+        IndexMaintainer maintainer = index.getIndexMaintainer(table, connection);
+
  

[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-07 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166816466
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/* ... Apache license header and imports elided; identical to the earlier quote of this file ... */
+
+
+public class PhoenixTxnIndexMutationGenerator {
--- End diff --

I reviewed this, but it's difficult to determine whether anything has changed. I 
see that you've removed coprocessor-specific things like 
RegionCoprocessorEnvironment and the serialized PhoenixIndexMetaData, which you 
don't need any more. Also, attributes aren't passed through TxTableState, but it 
seems we don't rely on that anywhere I can see. Any other changes you remember 
that I should look at?


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-07 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166809191
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java ---
@@ -1038,32 +1038,21 @@ public void setValue(PColumn column, byte[] byteValue) {
         @Override
         public void delete() {
             newMutations();
-            // we're using the Tephra column family delete marker here to prevent the translation
-            // of deletes to puts by the Tephra's TransactionProcessor
-            if (PTableImpl.this.isTransactional()) {
-                Put put = new Put(key);
-                if (families.isEmpty()) {
-                    put.add(SchemaUtil.getEmptyColumnFamily(PTableImpl.this), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                        HConstants.EMPTY_BYTE_ARRAY);
-                } else {
-                    for (PColumnFamily colFamily : families) {
-                        put.add(colFamily.getName().getBytes(), TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker(), ts,
-                            HConstants.EMPTY_BYTE_ARRAY);
-                    }
-                }
-                deleteRow = put;
--- End diff --

Does the code in IndexMaintainer here need to change too (i.e. check for 
getFamilyDeleteMarker()), or is it fine?

    private DeleteType getDeleteTypeOrNull(Collection<Cell> pendingUpdates, int nCFs) {
        int nDeleteCF = 0;
        int nDeleteVersionCF = 0;
        for (Cell kv : pendingUpdates) {
            if (kv.getTypeByte() == KeyValue.Type.DeleteFamilyVersion.getCode()) {
                nDeleteVersionCF++;
            }
            else if (kv.getTypeByte() == KeyValue.Type.DeleteFamily.getCode()
                    // Since we don't include the index rows in the change set for txn tables,
                    // we need to detect row deletes that have been transformed by the TransactionProcessor
                    || (CellUtil.matchingQualifier(kv, TransactionFactory.getTransactionFactory().getTransactionContext().getFamilyDeleteMarker())
                        && CellUtil.matchingValue(kv, HConstants.EMPTY_BYTE_ARRAY))) {
                nDeleteCF++;
            }
        }



---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-02-07 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/291#discussion_r166808341
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -850,19 +849,12 @@ private void addCoprocessors(byte[] tableName, HTableDescriptor descriptor, PTab
                 && !SchemaUtil.isMetaTable(tableName)
                 && !SchemaUtil.isStatsTable(tableName)) {
             if (isTransactional) {
-                if (!descriptor.hasCoprocessor(PhoenixTransactionalIndexer.class.getName())) {
-                    descriptor.addCoprocessor(PhoenixTransactionalIndexer.class.getName(), null, priority, null);
-                }
--- End diff --

This handles the case of the creation of new tables, but we'd need to 
handle the case of existing tables as well. This is somewhat trickier because 
ideally we'd want to handle the case in which the server jar has been upgraded, 
but the client jar hasn't been. Until the client is upgraded (in which case 
index maintenance will occur on the client side), we'd need to continue 
performing index maintenance through this coprocessor. The somewhat clunky way 
of doing that is to pass the client Phoenix version through the 
PhoenixIndexMetaData. This would be somewhat painful.

Since we've labelled transactions as "beta", perhaps we can punt on this. 
In this case, we'd require both the client and server jar to be upgraded at the 
same time (if transactional tables are already being used). With this approach, 
we'd want to modify PhoenixTransactionalIndexer to be an empty class still 
derived from BaseRegionObserver.
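
The "empty class" option could look roughly like this (BaseRegionObserver is stubbed locally for the sketch; the real superclass would be org.apache.hadoop.hbase.coprocessor.BaseRegionObserver):

```java
// Sketch of the "punt" option: keep PhoenixTransactionalIndexer on the class
// path so existing table descriptors still resolve, but strip all behavior.
// BaseRegionObserver below is a local stub standing in for the HBase class.
class BaseRegionObserver {
    // stub standing in for the HBase coprocessor base class
}

class PhoenixTransactionalIndexer extends BaseRegionObserver {
    // Intentionally empty: with client-side transactional index maintenance,
    // the server-side hooks no longer do anything.
}
```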


---


[GitHub] phoenix pull request #291: [PHOENIX-4278] Implement pure client side transac...

2018-01-29 Thread ohadshacham
GitHub user ohadshacham opened a pull request:

https://github.com/apache/phoenix/pull/291

[PHOENIX-4278] Implement pure client side transactional index mainten…

…ance.

Moving the index update from a coprocessor to the client side. This is 
done only for the transactional case, and it also aids in updating the shadow 
cells when using Omid.
In this commit, I also changed the way deletions are handled in Tephra 
by using its regular API instead of Tephra's deletion marker.
This will also aid in updating Omid's shadow cells upon commit.
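
At a high level, client-side index maintenance means deriving the index-table rows from the data-table rows before anything is sent to the server. A toy illustration of that derivation (greatly simplified; the real logic lives in IndexMaintainer / PhoenixTxnIndexMutationGenerator, and the key scheme here is invented):

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model: for every data-table row written in a transaction, the client
// computes the corresponding index-table row itself, so no server-side
// coprocessor is needed for index maintenance.
class ClientSideIndexer {
    /** Invented key scheme: indexed value, a separator, then the data row key. */
    static String indexRowKey(String indexedValue, String dataRowKey) {
        return indexedValue + "\u0000" + dataRowKey;
    }

    /** dataRows maps data row key -> indexed column value. */
    static Map<String, String> indexUpdates(Map<String, String> dataRows) {
        Map<String, String> indexRows = new TreeMap<>();
        for (Map.Entry<String, String> e : dataRows.entrySet()) {
            // The index row stores an empty value; all info is in the key.
            indexRows.put(indexRowKey(e.getValue(), e.getKey()), "");
        }
        return indexRows;
    }
}
```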

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ohadshacham/phoenix PHOENIX-4278-sq

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #291


commit 5fa8c3b392ba3323c90bf7b3685a4996528c35f6
Author: Ohad Shacham 
Date:   2018-01-29T15:55:11Z

[PHOENIX-4278] Implement pure client side transactional index maintenance.
Moving the index update from a coprocessor to the client side. This is 
done only for the transactional case, and it also aids in updating the shadow 
cells when using Omid.
In this commit, I also changed the way deletions are handled in Tephra 
by using its regular API instead of Tephra's deletion marker.
This will also aid in updating Omid's shadow cells upon commit.




---