Build failed in Jenkins: Phoenix | Master | Hadoop1 #331

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-master-hadoop1/331/changes

Changes:

[jtaylor] PHOENIX-1047 Auto cast - add/sub decimal constant and integer

--
Started by an SCM change
Started by an SCM change
Building remotely on ubuntu3 (Ubuntu ubuntu) in workspace https://builds.apache.org/job/Phoenix-master-hadoop1/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
  git --version
  git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/master^{commit}
Checking out Revision acd35f0ebbb3cf3151741832dc3f3e08a585318c (origin/master)
  git config core.sparsecheckout
  git checkout -f acd35f0ebbb3cf3151741832dc3f3e08a585318c
  git rev-list c85d4c6ad145babcd5eca2fde1dc632071105b77
No emails were triggered.
FATAL: Unable to produce a script file
java.io.IOException: Failed to create a temp file on https://builds.apache.org/job/Phoenix-master-hadoop1/ws/
at hudson.FilePath.createTextTempFile(FilePath.java:1265)
at hudson.tasks.CommandInterpreter.createScriptFile(CommandInterpreter.java:144)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:804)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:585)
at hudson.model.Run.execute(Run.java:1676)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: java.io.IOException: remote file operation failed: https://builds.apache.org/job/Phoenix-master-hadoop1/ws/ at hudson.remoting.Channel@29e76aff:ubuntu3
at hudson.FilePath.act(FilePath.java:910)
at hudson.FilePath.act(FilePath.java:887)
at hudson.FilePath.createTextTempFile(FilePath.java:1239)
... 12 more
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:316)
at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
at hudson.FilePath$15.invoke(FilePath.java:1258)
at hudson.FilePath$15.invoke(FilePath.java:1239)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2462)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Phoenix | Master | Hadoop1 #330
Archived 573 artifacts
Archive block size is 32768
Received 1069 blocks and 180826303 bytes
Compression is 16.2%
Took 56 sec
Updating PHOENIX-1047
Recording test results
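
Both of the failures in this build (and the #277 build below) share one root cause: the disk backing the workspace on the ubuntu3 agent was full, so Jenkins could not even write the 'Execute shell' script (java.io.IOException: No space left on device). As a minimal, hypothetical Java sketch of that condition — not Jenkins or Phoenix source; it only assumes a directory path passed on the command line:

import java.io.File;
import java.io.IOException;

public class WorkspaceSpaceCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical path; the real agent workspace lives under /x1/jenkins/jenkins-slave/workspace/...
        File workspace = new File(args.length > 0 ? args[0] : "/tmp");
        long usableMb = workspace.getUsableSpace() / (1024 * 1024);
        System.out.println("Usable space in " + workspace + ": " + usableMb + " MB");
        if (usableMb == 0) {
            // This is the state the build hit: any temp-file write fails with
            // "No space left on device".
            throw new IOException("No space left on device: " + workspace);
        }
        // The same kind of temp script file Jenkins' createTextTempFile tries to create.
        File temp = File.createTempFile("jenkins", ".sh", workspace);
        temp.deleteOnExit();
        System.out.println("Created temp script file " + temp);
    }
}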


Build failed in Jenkins: Phoenix | 4.0 | Hadoop1 #277

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-4.0-hadoop1/277/changes

Changes:

[jtaylor] PHOENIX-1047 Auto cast - add/sub decimal constant and integer

--
Started by an SCM change
Started by an SCM change
Building remotely on ubuntu3 (Ubuntu ubuntu) in workspace https://builds.apache.org/job/Phoenix-4.0-hadoop1/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
  git --version
  git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/4.0^{commit}
Checking out Revision fcdcd697ccf13cd377980a186c32fb8b4121d1a1 (origin/4.0)
  git config core.sparsecheckout
  git checkout -f fcdcd697ccf13cd377980a186c32fb8b4121d1a1
  git rev-list b07658e8791cebf59ec45beeac56ec7ca5252a4d
No emails were triggered.
FATAL: Unable to produce a script file
java.io.IOException: Failed to create a temp file on https://builds.apache.org/job/Phoenix-4.0-hadoop1/ws/
at hudson.FilePath.createTextTempFile(FilePath.java:1265)
at hudson.tasks.CommandInterpreter.createScriptFile(CommandInterpreter.java:144)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:804)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:585)
at hudson.model.Run.execute(Run.java:1676)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: java.io.IOException: remote file operation failed: https://builds.apache.org/job/Phoenix-4.0-hadoop1/ws/ at hudson.remoting.Channel@29e76aff:ubuntu3
at hudson.FilePath.act(FilePath.java:910)
at hudson.FilePath.act(FilePath.java:887)
at hudson.FilePath.createTextTempFile(FilePath.java:1239)
... 12 more
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:316)
at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
at hudson.FilePath$15.invoke(FilePath.java:1258)
at hudson.FilePath$15.invoke(FilePath.java:1239)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2462)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Phoenix | 4.0 | Hadoop1 #276
Archived 566 artifacts
Archive block size is 32768
Received 1118 blocks and 198799185 bytes
Compression is 15.6%
Took 59 sec
Recording test results


Jenkins build is back to normal : Phoenix | 4.0 | Hadoop2 #36

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-4.0-hadoop2/36/changes



Jenkins build is back to normal : Phoenix | 4.0 | Hadoop1 #278

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-4.0-hadoop1/278/changes



svn commit: r1618279 - in /phoenix/site: publish/ publish/language/ source/src/site/

2014-08-15 Thread mujtaba
Author: mujtaba
Date: Fri Aug 15 21:35:37 2014
New Revision: 1618279

URL: http://svn.apache.org/r1618279
Log:
Fix Phoenix site search URL

Modified:
phoenix/site/publish/Phoenix-in-15-minutes-or-less.html
phoenix/site/publish/array_type.html
phoenix/site/publish/building.html
phoenix/site/publish/building_website.html
phoenix/site/publish/bulk_dataload.html
phoenix/site/publish/contributing.html
phoenix/site/publish/download.html
phoenix/site/publish/dynamic_columns.html
phoenix/site/publish/faq.html
phoenix/site/publish/flume.html
phoenix/site/publish/index.html
phoenix/site/publish/issues.html
phoenix/site/publish/joins.html
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/mailing_list.html
phoenix/site/publish/multi-tenancy.html
phoenix/site/publish/paged.html
phoenix/site/publish/performance.html
phoenix/site/publish/phoenix_on_emr.html
phoenix/site/publish/pig_integration.html
phoenix/site/publish/recent.html
phoenix/site/publish/resources.html
phoenix/site/publish/roadmap.html
phoenix/site/publish/salted.html
phoenix/site/publish/secondary_indexing.html
phoenix/site/publish/sequences.html
phoenix/site/publish/skip_scan.html
phoenix/site/publish/source.html
phoenix/site/publish/team.html
phoenix/site/publish/tracing.html
phoenix/site/publish/tuning.html
phoenix/site/publish/upgrade_from_2_2.html
phoenix/site/publish/views.html
phoenix/site/source/src/site/site.xml

Modified: phoenix/site/publish/Phoenix-in-15-minutes-or-less.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/Phoenix-in-15-minutes-or-less.html?rev=1618279&r1=1618278&r2=1618279&view=diff
==
--- phoenix/site/publish/Phoenix-in-15-minutes-or-less.html (original)
+++ phoenix/site/publish/Phoenix-in-15-minutes-or-less.html Fri Aug 15 21:35:37 2014
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at 2014-08-14
+ Generated by Apache Maven Doxia at 2014-08-15
  Rendered using Reflow Maven Skin 1.1.0 (http://andriusvelykis.github.io/reflow-maven-skin)
 -->
 <html xml:lang="en" lang="en">
@@ -330,7 +330,7 @@
 			</ul>
 		</div>
 		<div class="span3 bottom-description">
-			<form action="https://www.google.com/search" method="get"><input value="phoenix.incubator.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"/></form>
+			<form action="https://www.google.com/search" method="get"><input value="phoenix.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"/></form>
 		</div>
 	</div>
 </div>

Modified: phoenix/site/publish/array_type.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/array_type.html?rev=1618279&r1=1618278&r2=1618279&view=diff
==
--- phoenix/site/publish/array_type.html (original)
+++ phoenix/site/publish/array_type.html Fri Aug 15 21:35:37 2014
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at 2014-08-14
+ Generated by Apache Maven Doxia at 2014-08-15
  Rendered using Reflow Maven Skin 1.1.0 (http://andriusvelykis.github.io/reflow-maven-skin)
 -->
 <html xml:lang="en" lang="en">
@@ -356,7 +356,7 @@ SELECT region_name FROM regions WHERE '9
 			</ul>
 		</div>
 		<div class="span3 bottom-description">
-			<form action="https://www.google.com/search" method="get"><input value="phoenix.incubator.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"/></form>
+			<form action="https://www.google.com/search" method="get"><input value="phoenix.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"/></form>
 		</div>
 	</div>
 </div>

Modified: phoenix/site/publish/building.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/building.html?rev=1618279&r1=1618278&r2=1618279&view=diff
==
--- phoenix/site/publish/building.html (original)
+++ phoenix/site/publish/building.html Fri Aug 15 21:35:37 2014
@@ -1,7 +1,7 @@
 
 !DOCTYPE 

git commit: PHOENIX-1174 Rename and move properties using existing convention

2014-08-15 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.0 2a3a253c2 -> 5aa381516


PHOENIX-1174 Rename and move properties using existing convention


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5aa38151
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5aa38151
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5aa38151

Branch: refs/heads/4.0
Commit: 5aa38151681b0c9f7f15c7e520f1d1eea4821565
Parents: 2a3a253
Author: James Taylor jamestay...@apache.org
Authored: Fri Aug 15 14:32:14 2014 -0700
Committer: James Taylor jamestay...@apache.org
Committed: Fri Aug 15 14:32:14 2014 -0700

--
 .../phoenix/end2end/index/IndexHandlerIT.java   |  4 ++--
 .../ipc/PhoenixIndexRpcSchedulerFactory.java| 23 +---
 .../org/apache/phoenix/query/QueryServices.java |  9 
 .../phoenix/query/QueryServicesOptions.java | 11 --
 .../org/apache/phoenix/trace/util/Tracing.java  | 18 ++-
 .../PhoenixIndexRpcSchedulerFactoryTest.java|  5 +++--
 6 files changed, 35 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5aa38151/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
index 8536652..1507d6b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
@@ -38,7 +38,7 @@ import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexQosRpcControllerFactory;
 import org.apache.phoenix.hbase.index.TableName;
-import org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.Before;
@@ -159,7 +159,7 @@ public class IndexHandlerIT {
 // check the counts on the rpc controller
         assertEquals("Didn't get the expected number of index priority writes!", 1,
             (int) CountingIndexClientRpcController.priorityCounts
-                .get(PhoenixIndexRpcSchedulerFactory.DEFAULT_INDEX_MIN_PRIORITY));
+                .get(QueryServicesOptions.DEFAULT_INDEX_MIN_PRIORITY));
 
 table.close();
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5aa38151/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
index 500db7c..8e0b86f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
@@ -26,6 +26,8 @@ import org.apache.hadoop.hbase.ipc.RpcScheduler;
 import org.apache.hadoop.hbase.regionserver.RegionServerServices;
 import org.apache.hadoop.hbase.regionserver.RpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.SimpleRpcSchedulerFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 
 import com.google.common.base.Preconditions;
 
@@ -37,21 +39,6 @@ public class PhoenixIndexRpcSchedulerFactory implements RpcSchedulerFactory {
 
     private static final Log LOG = LogFactory.getLog(PhoenixIndexRpcSchedulerFactory.class);
 
-    private static final String INDEX_HANDLER_COUNT_KEY =
-            "org.apache.phoenix.regionserver.index.handler.count";
-    private static final int DEFAULT_INDEX_HANDLER_COUNT = 30;
-
-    /**
-     * HConstants#HIGH_QOS is the max we will see to a standard table. We go higher to differentiate
-     * and give some room for things in the middle
-     */
-    public static final int DEFAULT_INDEX_MIN_PRIORITY = 1000;
-    public static final int DEFAULT_INDEX_MAX_PRIORITY = 1050;
-    public static final String MIN_INDEX_PRIOIRTY_KEY =
-            "org.apache.phoenix.regionserver.index.priority.min";
-    public static final String MAX_INDEX_PRIOIRTY_KEY =
-            "org.apache.phoenix.regionserver.index.priority.max";
-
     private static final String VERSION_TOO_OLD_FOR_INDEX_RPC =
             "Running an older version of HBase (less than 0.98.4), Phoenix index RPC handling cannot be enabled.";
 
@@ -75,9 +62,9 
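
The hunks above delete the index RPC handler-count and priority settings from PhoenixIndexRpcSchedulerFactory, and the new imports (plus the IndexHandlerIT change) show the defaults now being read from QueryServicesOptions. A hedged sketch of what the relocated declarations plausibly look like; the values are copied from the removed lines, but the exact placement and any renamed property strings are assumptions, not taken from this mail:

// Sketch only: constants formerly declared in PhoenixIndexRpcSchedulerFactory,
// assumed here to live alongside the other defaults in QueryServicesOptions.
public final class QueryServicesOptionsSketch {
    public static final int DEFAULT_INDEX_HANDLER_COUNT = 30;
    public static final int DEFAULT_INDEX_MIN_PRIORITY = 1000;
    public static final int DEFAULT_INDEX_MAX_PRIORITY = 1050;

    private QueryServicesOptionsSketch() {
        // constants only
    }
}

Callers such as IndexHandlerIT then reference QueryServicesOptions.DEFAULT_INDEX_MIN_PRIORITY directly, as the test diff above shows.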

git commit: PHOENIX-1173: MutableIndexFailureIT.java doesn't finish sometimes or is flappy.

2014-08-15 Thread jeffreyz
Repository: phoenix
Updated Branches:
  refs/heads/3.0 19dc23aa5 -> 71cc23c8f


PHOENIX-1173: MutableIndexFailureIT.java doesn't finish sometimes or is flappy.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/71cc23c8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/71cc23c8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/71cc23c8

Branch: refs/heads/3.0
Commit: 71cc23c8fa03694db1816cf8e3f5d0bb3f391ccb
Parents: 19dc23a
Author: Jeffrey Zhong jeffr...@apache.org
Authored: Fri Aug 15 14:02:51 2014 -0700
Committer: Jeffrey Zhong jeffr...@apache.org
Committed: Fri Aug 15 14:02:51 2014 -0700

--
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 2 ++
 .../apache/phoenix/coprocessor/MetaDataRegionObserver.java  | 9 ++---
 2 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/71cc23c8/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 07d4cc8..9cb3b89 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1207,6 +1207,8 @@ public class MetaDataEndpointImpl extends BaseEndpointCoprocessor implements Met
                 dataTableKey = SchemaUtil.getTableKey(tenantId, schemaName, dataTableKV.getValue());
             }
             if(dataTableKey != null) {
+                // make a copy of tableMetadata
+                tableMetadata = new ArrayList<Mutation>(tableMetadata);
                 // insert an empty KV to trigger time stamp update on data table row
                 Put p = new Put(dataTableKey);
                 p.add(TABLE_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES, timeStamp, ByteUtil.EMPTY_BYTE_ARRAY);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/71cc23c8/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
index 2820e59..1526a98 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
@@ -24,6 +24,9 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Timer;
 import java.util.TimerTask;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.logging.Log;
@@ -64,14 +67,14 @@ import org.apache.phoenix.util.SchemaUtil;
  */
 public class MetaDataRegionObserver extends BaseRegionObserver {
     public static final Log LOG = LogFactory.getLog(MetaDataRegionObserver.class);
-    protected Timer scheduleTimer = new Timer(true);
+    protected ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
     private boolean enableRebuildIndex = QueryServicesOptions.DEFAULT_INDEX_FAILURE_HANDLING_REBUILD;
     private long rebuildIndexTimeInterval = QueryServicesOptions.DEFAULT_INDEX_FAILURE_HANDLING_REBUILD_INTERVAL;
 
     @Override
     public void preClose(final ObserverContext<RegionCoprocessorEnvironment> c,
             boolean abortRequested) {
-        scheduleTimer.cancel();
+        executor.shutdownNow();
         GlobalCache.getInstance(c.getEnvironment()).getMetaDataCache().invalidateAll();
     }
 
@@ -112,7 +115,7 @@ public class MetaDataRegionObserver extends BaseRegionObserver {
                 // starts index rebuild schedule work
                 BuildIndexScheduleTask task = new BuildIndexScheduleTask(e.getEnvironment());
                 // run scheduled task every 10 secs
-                scheduleTimer.schedule(task, 1, rebuildIndexTimeInterval);
+                executor.scheduleAtFixedRate(task, 1, rebuildIndexTimeInterval, TimeUnit.MILLISECONDS);
             } catch (ClassNotFoundException ex) {
                 LOG.error("BuildIndexScheduleTask cannot start!", ex);
             }
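
The second hunk above replaces the java.util.Timer-based schedule with a single-threaded ScheduledThreadPoolExecutor that is shut down with shutdownNow() in preClose(). A self-contained sketch of that scheduling pattern, with a hypothetical task standing in for BuildIndexScheduleTask:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RebuildScheduleSketch {
    private final ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);

    public void start(long intervalMillis) {
        // Hypothetical stand-in for BuildIndexScheduleTask.
        Runnable task = new Runnable() {
            @Override
            public void run() {
                System.out.println("index rebuild pass");
            }
        };
        // Initial delay of 1 ms, then repeat at the configured interval, as in the patch.
        executor.scheduleAtFixedRate(task, 1, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public void close() {
        // shutdownNow() interrupts an in-flight run instead of letting it linger,
        // which is what the patch uses in place of Timer.cancel().
        executor.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        RebuildScheduleSketch scheduler = new RebuildScheduleSketch();
        scheduler.start(1000);
        Thread.sleep(3500);   // let roughly three passes run
        scheduler.close();
    }
}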



git commit: PHOENIX-1174 Rename and move properties using existing convention

2014-08-15 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master dfcde1046 -> 367662dc8


PHOENIX-1174 Rename and move properties using existing convention


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/367662dc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/367662dc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/367662dc

Branch: refs/heads/master
Commit: 367662dc884433cd3e626b65e4417716966062fb
Parents: dfcde10
Author: James Taylor jamestay...@apache.org
Authored: Fri Aug 15 14:32:14 2014 -0700
Committer: James Taylor jamestay...@apache.org
Committed: Fri Aug 15 14:36:49 2014 -0700

--
 .../phoenix/end2end/index/IndexHandlerIT.java   |  4 ++--
 .../ipc/PhoenixIndexRpcSchedulerFactory.java| 23 +---
 .../org/apache/phoenix/query/QueryServices.java |  9 
 .../phoenix/query/QueryServicesOptions.java | 11 --
 .../org/apache/phoenix/trace/util/Tracing.java  | 18 ++-
 .../PhoenixIndexRpcSchedulerFactoryTest.java|  5 +++--
 6 files changed, 35 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/367662dc/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
index 8536652..1507d6b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexHandlerIT.java
@@ -38,7 +38,7 @@ import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.IndexQosRpcControllerFactory;
 import org.apache.phoenix.hbase.index.TableName;
-import org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory;
+import org.apache.phoenix.query.QueryServicesOptions;
 import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.Before;
@@ -159,7 +159,7 @@ public class IndexHandlerIT {
 // check the counts on the rpc controller
         assertEquals("Didn't get the expected number of index priority writes!", 1,
             (int) CountingIndexClientRpcController.priorityCounts
-                .get(PhoenixIndexRpcSchedulerFactory.DEFAULT_INDEX_MIN_PRIORITY));
+                .get(QueryServicesOptions.DEFAULT_INDEX_MIN_PRIORITY));
 
 table.close();
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/367662dc/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
index 500db7c..8e0b86f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/ipc/PhoenixIndexRpcSchedulerFactory.java
@@ -26,6 +26,8 @@ import org.apache.hadoop.hbase.ipc.RpcScheduler;
 import org.apache.hadoop.hbase.regionserver.RegionServerServices;
 import org.apache.hadoop.hbase.regionserver.RpcSchedulerFactory;
 import org.apache.hadoop.hbase.regionserver.SimpleRpcSchedulerFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
 
 import com.google.common.base.Preconditions;
 
@@ -37,21 +39,6 @@ public class PhoenixIndexRpcSchedulerFactory implements RpcSchedulerFactory {
 
     private static final Log LOG = LogFactory.getLog(PhoenixIndexRpcSchedulerFactory.class);
 
-    private static final String INDEX_HANDLER_COUNT_KEY =
-            "org.apache.phoenix.regionserver.index.handler.count";
-    private static final int DEFAULT_INDEX_HANDLER_COUNT = 30;
-
-    /**
-     * HConstants#HIGH_QOS is the max we will see to a standard table. We go higher to differentiate
-     * and give some room for things in the middle
-     */
-    public static final int DEFAULT_INDEX_MIN_PRIORITY = 1000;
-    public static final int DEFAULT_INDEX_MAX_PRIORITY = 1050;
-    public static final String MIN_INDEX_PRIOIRTY_KEY =
-            "org.apache.phoenix.regionserver.index.priority.min";
-    public static final String MAX_INDEX_PRIOIRTY_KEY =
-            "org.apache.phoenix.regionserver.index.priority.max";
-
     private static final String VERSION_TOO_OLD_FOR_INDEX_RPC =
             "Running an older version of HBase (less than 0.98.4), Phoenix index RPC handling cannot be enabled.";
 
@@ -75,9 

git commit: PHOENIX-1173: MutableIndexFailureIT.java doesn't finish sometimes or is flappy.

2014-08-15 Thread jeffreyz
Repository: phoenix
Updated Branches:
  refs/heads/master 367662dc8 -> ebb6a7adb


PHOENIX-1173: MutableIndexFailureIT.java doesn't finish sometimes or is flappy.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ebb6a7ad
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ebb6a7ad
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ebb6a7ad

Branch: refs/heads/master
Commit: ebb6a7adb9134eb2413950796bd5f4e80a250e7d
Parents: 367662d
Author: Jeffrey Zhong jeffr...@apache.org
Authored: Fri Aug 15 14:02:51 2014 -0700
Committer: Jeffrey Zhong jeffr...@apache.org
Committed: Fri Aug 15 16:05:55 2014 -0700

--
 .../apache/phoenix/coprocessor/MetaDataEndpointImpl.java| 2 ++
 .../apache/phoenix/coprocessor/MetaDataRegionObserver.java  | 9 ++---
 2 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ebb6a7ad/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index b99483b..5b43a90 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1508,6 +1508,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 dataTableKey = SchemaUtil.getTableKey(tenantId, schemaName, dataTableKV.getValue());
             }
             if(dataTableKey != null) {
+                // make a copy of tableMetadata
+                tableMetadata = new ArrayList<Mutation>(tableMetadata);
                 // insert an empty KV to trigger time stamp update on data table row
                 Put p = new Put(dataTableKey);
                 p.add(TABLE_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES, timeStamp, ByteUtil.EMPTY_BYTE_ARRAY);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ebb6a7ad/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
index 6ce0148..822ced8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
@@ -28,6 +28,9 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Timer;
 import java.util.TimerTask;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.commons.logging.Log;
@@ -65,14 +68,14 @@ import org.apache.phoenix.util.SchemaUtil;
  */
 public class MetaDataRegionObserver extends BaseRegionObserver {
     public static final Log LOG = LogFactory.getLog(MetaDataRegionObserver.class);
-    protected Timer scheduleTimer = new Timer(true);
+    protected ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
     private boolean enableRebuildIndex = QueryServicesOptions.DEFAULT_INDEX_FAILURE_HANDLING_REBUILD;
     private long rebuildIndexTimeInterval = QueryServicesOptions.DEFAULT_INDEX_FAILURE_HANDLING_REBUILD_INTERVAL;
 
     @Override
     public void preClose(final ObserverContext<RegionCoprocessorEnvironment> c,
             boolean abortRequested) {
-        scheduleTimer.cancel();
+        executor.shutdownNow();
         GlobalCache.getInstance(c.getEnvironment()).getMetaDataCache().invalidateAll();
     }
 
@@ -113,7 +116,7 @@ public class MetaDataRegionObserver extends BaseRegionObserver {
                 // starts index rebuild schedule work
                 BuildIndexScheduleTask task = new BuildIndexScheduleTask(e.getEnvironment());
                 // run scheduled task every 10 secs
-                scheduleTimer.schedule(task, 1, rebuildIndexTimeInterval);
+                executor.scheduleAtFixedRate(task, 1, rebuildIndexTimeInterval, TimeUnit.MILLISECONDS);
             } catch (ClassNotFoundException ex) {
                 LOG.error("BuildIndexScheduleTask cannot start!", ex);
             }
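
The MetaDataEndpointImpl part of this commit is a one-line defensive copy: tableMetadata is re-wrapped in a new ArrayList<Mutation> before the extra empty-KV Put is appended, so the caller's list is never mutated. A small standalone sketch of the same idiom, using String in place of the HBase Mutation type:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DefensiveCopySketch {
    static List<String> withExtraMutation(List<String> tableMetadata, String extra) {
        // make a copy of tableMetadata (the line the patch adds, with Mutation rather than String)
        tableMetadata = new ArrayList<String>(tableMetadata);
        tableMetadata.add(extra);
        return tableMetadata;
    }

    public static void main(String[] args) {
        // Arrays.asList is fixed-size; adding to it directly would throw, which is
        // exactly the kind of caller-owned list the copy protects against.
        List<String> original = Arrays.asList("put1", "put2");
        List<String> extended = withExtraMutation(original, "emptyKvPut");
        System.out.println(original);   // [put1, put2]  (unchanged)
        System.out.println(extended);   // [put1, put2, emptyKvPut]
    }
}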



Build failed in Jenkins: Phoenix | 4.0 | Hadoop1 #279

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-4.0-hadoop1/279/changes

Changes:

[jamestaylor] PHOENIX-1174 Rename and move properties using existing convention

[jeffreyz] PHOENIX-1173: MutableIndexFailureIT.java doesn't finish sometimes or is flappy.

--
[...truncated 459 lines...]
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.033 sec - in org.apache.phoenix.end2end.ToCharFunctionIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.375 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.MultiCfQueryExecIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.723 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CoalesceFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.728 sec - in org.apache.phoenix.end2end.CoalesceFunctionIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.248 sec - in org.apache.phoenix.end2end.MultiCfQueryExecIT
Running org.apache.phoenix.end2end.IsNullIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.81 sec - in org.apache.phoenix.end2end.IsNullIT
Running org.apache.phoenix.end2end.StddevIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.213 sec - in org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.441 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Running org.apache.phoenix.end2end.ArrayIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.907 sec - in org.apache.phoenix.end2end.StddevIT
Running org.apache.phoenix.end2end.GroupByCaseIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.415 sec - in org.apache.phoenix.end2end.GroupByCaseIT
Running org.apache.phoenix.end2end.SpooledOrderByIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.651 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.247 sec - in org.apache.phoenix.end2end.SpooledOrderByIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.682 sec - in org.apache.phoenix.end2end.UpsertSelectIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.8 sec - in org.apache.phoenix.end2end.CreateTableIT
Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.586 sec - in org.apache.phoenix.end2end.ArrayIT
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.708 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.933 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 91, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.776 sec - in org.apache.phoenix.end2end.GroupByIT
Tests run: 182, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.872 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 203, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.341 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT

Results :

Tests run: 1268, Failures: 0, Errors: 0, Skipped: 3

[INFO] 
[INFO] --- maven-failsafe-plugin:2.17:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] Failsafe report directory: /x1/jenkins/jenkins-slave/workspace/Phoenix-4.0-hadoop1/phoenix-core/target/failsafe-reports
[INFO] parallel='none', perCoreThreadCount=true, threadCount=0, useUnlimitedThreads=false, threadCountSuites=0, threadCountClasses=0, threadCountMethods=0, parallelOptimized=true

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.phoenix.end2end.QueryExecWithoutSCNIT
Running org.apache.phoenix.end2end.TenantSpecificViewIndexIT
Running org.apache.phoenix.end2end.PhoenixEncodeDecodeIT
Running org.apache.phoenix.end2end.DeleteIT
Running org.apache.phoenix.end2end.BinaryRowKeyIT
Running org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.229 sec - in org.apache.phoenix.end2end.BinaryRowKeyIT
Running org.apache.phoenix.end2end.EncodeFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.065 sec - in org.apache.phoenix.end2end.QueryExecWithoutSCNIT
Running org.apache.phoenix.end2end.TimezoneOffsetFunctionIT

Jenkins build is back to normal : Phoenix | 3.0 | Hadoop1 #187

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Phoenix-3.0-hadoop1/187/changes