Error: Malformed connection url
Hi,

I use a Hadoop 2.4.1 cluster and an HBase 0.98.6 cluster; my HBase rootdir is hdfs://my_cluster/hbase.

hbase-site.xml:
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://my_cluster/hbase</value>
  </property>

hdfs-site.xml:
  <property>
    <name>dfs.nameservices</name>
    <value>my_cluster</value>
    <final>true</final>
  </property>

The hbase shell works well, but when I tried Phoenix I got the following error:

./sqlline.py z1:2181,z2:2181,z3:2181:/my_cluster/hbase

Error: ERROR 102 (08001): Malformed connection url. jdbc:phoenix:z1:2181,z2:2181,z3:2181:/my_cluster/hbase (state=08001,code=102)
java.sql.SQLException: ERROR 102 (08001): Malformed connection url. jdbc:phoenix:z1:2181,z2:2181,z3:2181:/my_cluster/hbase
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:333)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.getMalFormedUrlException(PhoenixEmbeddedDriver.java:183)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.create(PhoenixEmbeddedDriver.java:238)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:144)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:129)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
    at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4650)
    at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4701)
    at sqlline.SqlLine$Commands.connect(SqlLine.java:3942)
    at sqlline.SqlLine$Commands.connect(SqlLine.java:3851)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
    at sqlline.SqlLine.dispatch(SqlLine.java:817)
    at sqlline.SqlLine.initArgs(SqlLine.java:633)
    at sqlline.SqlLine.begin(SqlLine.java:680)
    at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
    at sqlline.SqlLine.main(SqlLine.java:424)
sqlline version 1.1.2

list;
No current connection

Question: What should be the correct connection URL if it is Hadoop2 and in HA cluster mode?

Regards,
Arthur
Re: Error: Malformed connection url
Hi,

Thank you so much!

I am new to Phoenix; could you advise where to set the JDBC connection string in Phoenix? Is it in some configuration of Phoenix?

Regards,
Arthur

and include the port number in the JDBC connection string:

jdbc:phoenix [ :<zookeeper quorum> [ :<port number> ] [ :/hbase ] ]

On 8 Oct, 2014, at 9:28 pm, yeshwanth kumar wrote:

> Hi Arthur,
>
> It is not related to the HDFS or HBase path.
> You need to specify the ZooKeeper quorum:
>
> jdbc:phoenix [ :<zookeeper quorum> [ :<port number> ] [ :<root node> ] ]
>
> -Yeshwanth
> Can you Imagine what I would do if I could do all I can - Art of War
>
> On Wed, Oct 8, 2014 at 5:42 PM, [email protected] wrote:
> > The hbase shell works well, when I tried Phoenix I got the following error:
> >
> > ./sqlline.py z1:2181,z2:2181,z3:2181:/my_cluster/hbase
> >
> > Error: ERROR 102 (08001): Malformed connection url. jdbc:phoenix:z1:2181,z2:2181,z3:2181:/my_cluster/hbase (state=08001,code=102)
> >
> > Question: What should be the correct connection URL if it is Hadoop2 and in HA Cluster mode?
> >
> > Regards
> > Arthur
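For reference, a minimal Java sketch of opening a connection through the Phoenix JDBC driver using the URL form described above. The quorum hosts z1, z2, z3, the port 2181, and the /hbase znode parent are the values used in this thread; the last URL component is HBase's ZooKeeper znode parent (zookeeper.znode.parent, /hbase by default), not an HDFS path, and the port appears only once after the comma-separated host list. This is only an illustration of the URL shape, not a prescribed setup.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PhoenixConnectSketch {
        public static void main(String[] args) throws Exception {
            // URL form: jdbc:phoenix:<zookeeper quorum>[:<port>][:<znode parent>]
            // Hosts, port, and znode parent below are the values from this thread.
            String url = "jdbc:phoenix:z1,z2,z3:2181:/hbase";
            // Driver class name as seen in the stack trace above.
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected to "
                        + conn.getMetaData().getDatabaseProductName() + " "
                        + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }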
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform
Hi,

I have two questions:

Q1: When trying sqlline, I got the following warning message; please advise how to resolve it.
Q2: How do I exit the sqlline command shell? I tried “exit;” and “quit;” but no luck.

I am using Hadoop 2.4.1 with HA and HBase 0.98.6.

Regards,
Arthur

./sqlline.py z1:/hbase
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:z1:/hbase none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:z1:/hbase
14/10/09 08:09:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/09 08:09:35 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://my_cluster/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:202)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:858)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:663)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:415)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:310)
    at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:235)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:147)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1510)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1489)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1489)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:129)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
    at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4650)
    at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4701)
    at sqlline.SqlLine$Commands.connect(SqlLine.java:3942)
    at sqlline.SqlLine$Commands.connect(SqlLine.java:3851)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
    at sqlline.SqlLine.dispatch(SqlLine.java:817)
    at sqlline.SqlLine.initArgs(SqlLine.java:633)
    at sqlline.SqlLine.begin(SqlLine.java:680)
    at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
    at sqlline.SqlLine.main(SqlLine.java:424)
14/10/09 08:09:35 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
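The "No FileSystem for scheme: hdfs" part of the warning generally means the client-side Hadoop Configuration cannot map the hdfs:// scheme to an implementation class, for example when the hadoop-hdfs jar (or its file-system service registration) is not on sqlline's classpath. The sketch below reproduces the lookup that HBase's DynamicClassLoader performs and shows one possible workaround; the explicit fs.hdfs.impl mapping is an assumption about the classpath, not the list's answer, and resolving the my_cluster HA nameservice additionally requires the cluster's hdfs-site.xml/core-site.xml to be visible to the client.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSchemeSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Possible workaround (assumption): map the hdfs scheme explicitly when
            // the hadoop-hdfs service registration is missing from the classpath.
            conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
            // Directory taken from the warning in this thread; needs the cluster's
            // hdfs-site.xml on the classpath so the my_cluster nameservice resolves.
            FileSystem fs = new Path("hdfs://my_cluster/hbase/lib").getFileSystem(conf);
            System.out.println("Resolved scheme hdfs to " + fs.getClass().getName());
        }
    }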
Re: Error: Malformed connection url
Hi,

I have managed to resolve the issue.

Regards,
Arthur

On 9 Oct, 2014, at 7:34 am, [email protected] wrote:

> Hi,
>
> Thank you so much!
>
> I am new to Phoenix; could you advise where to set the JDBC connection string in Phoenix? Is it in some configuration of Phoenix?
>
> Regards,
> Arthur
>
> and include the port number in the JDBC connection string:
>
> jdbc:phoenix [ :<zookeeper quorum> [ :<port number> ] [ :/hbase ] ]
>
> On 8 Oct, 2014, at 9:28 pm, yeshwanth kumar wrote:
>
>> Hi Arthur,
>>
>> It is not related to the HDFS or HBase path.
>> You need to specify the ZooKeeper quorum:
>>
>> jdbc:phoenix [ :<zookeeper quorum> [ :<port number> ] [ :<root node> ] ]
>>
>> -Yeshwanth
WARN impl.MetricsConfig: Cannot locate configuration
Hi,

I am trying the Phoenix performance test script. There are 4 warnings:

1) WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
   Question: How to resolve it?

2) WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://my_cluster/hbase/lib, ignored
   May I safely ignore this warning?

3) WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   May I safely ignore this warning?

Regards,
Arthur

(FYI, full output)

./performance.py z1,z2,z3:/hbase 1000
Phoenix Performance Evaluation Script 1.0 - Creating performance table...
14/10/09 09:32:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/09 09:32:19 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://my_cluster/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:202)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:858)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:663)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:415)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:310)
    at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:235)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:147)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1510)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1489)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1489)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:129)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
    at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:138)
14/10/09 09:32:19 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
no rows upserted
Time: 0.167 sec(s)

Query # 1 - Count - SELECT COUNT(1) FROM PERFORMANCE_1000;
Query # 2 - Group By First PK - SELECT HOST FROM PERFORMANCE_1000 GROUP BY HOST;
Query # 3 - Group By Second PK - SELECT DOMAIN FROM PERFORMANCE_1000 GROUP BY DOMAIN;
Query # 4 - Truncate + Group By - SELECT TRUNC(DATE,'DAY') DAY FROM PERFORMANCE_1000 GROUP BY TRUNC(DATE,'DAY');
Query # 5 - Filter + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE CORE<10;

Generating and upserting data...
.
14/10/09 09:32:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/09 09:32:20 WARN u
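For context, the performance script is essentially driving DDL, UPSERTs, and the listed queries through the Phoenix JDBC driver. Below is a minimal Java sketch of that kind of workload; the table name PERF_SKETCH and its columns are hypothetical illustrations, not the schema the script actually creates, and the connection URL reuses the quorum values from this thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PerformanceSketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:phoenix:z1,z2,z3:2181:/hbase"; // quorum taken from the thread
            try (Connection conn = DriverManager.getConnection(url)) {
                conn.setAutoCommit(false);
                // Hypothetical table, not the script's PERFORMANCE_1000 schema.
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("CREATE TABLE IF NOT EXISTS PERF_SKETCH ("
                            + " HOST VARCHAR NOT NULL,"
                            + " CORE BIGINT,"
                            + " CONSTRAINT pk PRIMARY KEY (HOST))");
                }
                // Upsert a batch of rows; Phoenix buffers mutations until commit.
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO PERF_SKETCH (HOST, CORE) VALUES (?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setString(1, "host-" + i);
                        ps.setLong(2, i % 20);
                        ps.executeUpdate();
                    }
                }
                conn.commit();
                // Same shape as the script's "Filter + Count" query.
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT COUNT(1) FROM PERF_SKETCH WHERE CORE < 10")) {
                    rs.next();
                    System.out.println("rows with CORE < 10: " + rs.getLong(1));
                }
            }
        }
    }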
How to change default field delimiter from COMMA to SEMICOLON
Hi,

My CSV file uses a semicolon as the field delimiter. I tried to use -d ; but it failed.

1) Without the -d parameter:

./psql.py z1:/hbase -t NATION ../sample/NATION.csv
14/10/09 11:14:21 ERROR util.CSVCommonsLoader: Error upserting record [19;"SAUDI ARABIA";4;"fluffy close warthogs into the fluffy gifts kindle silent permanent sauternes-- decoys hang slowly into the sentiments! forges toward"]: java.lang.NumberFormatException: For input string: "19;"SAUDI ARABIA";4;"fluffy close warthogs into the fluffy gifts kindle silent permanent sauternes-- decoys hang slowly into the sentiments! forges toward""

2) With the -d parameter:

./psql.py z1:/hbase -t NATION ../sample/NATION.csv -d ;

...
-d,--delimiter          Field delimiter for CSV loader. A digit is
                        interpreted as 1 -> ctrl A, 2 -> ctrl B ... 9 -> ctrl I.
-e,--escape-character   Escape character for CSV loader. A digit is
                        interpreted as a control character

CSV sample data:
0;"ARGENTINA";1;"ironic regular realms through the idly thin sauternes could eat boldly regular daring warthogs-- daringly idle somas could have to lo"
1;"BRAZIL";1;"silently quiet realms haggle boldly slow ruthless platelets? even i"
2;"CANADA";1;"fluffy pinto beans until the asymptotes doze slowly even epitaphs! doggedly busy excuses sublate carefully: quiet brave asymptotes boost sometimes on th"
9;"IRAN";4;"warthogs could poach even forges? bold bold attainments among the idly permanent warhorses are permanently in place of the bravely fu"
10;"IRAQ";4;"blithe excuses should have to believe; silent busy notornis print toward the slowly furious theodolites. even platelets serve bold ruthless tithes? shea"
11;"JAPAN";2;"dolphins can nag! enticingly bold warhorses will unwind never past the grouches; ironic quick s"

Q: How do I change the default field delimiter from COMMA to SEMICOLON on the psql.py command line?

Regards,
Arthur
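For what it is worth, the CSVCommonsLoader named in the error above is built on Apache Commons CSV, so the failure mode can be illustrated with the sketch below: parsed with the default comma delimiter, a semicolon-separated row comes back as a single field, which Phoenix then cannot convert to a number for the first column. The sample row is simplified (quotes omitted) purely for illustration.

    import java.util.List;
    import org.apache.commons.csv.CSVFormat;
    import org.apache.commons.csv.CSVParser;
    import org.apache.commons.csv.CSVRecord;

    public class DelimiterSketch {
        public static void main(String[] args) throws Exception {
            // Simplified semicolon-separated row (quotes omitted for clarity).
            String row = "0;ARGENTINA;1;ironic regular realms";

            // Default comma delimiter: the whole row is a single field.
            List<CSVRecord> commaParsed =
                    CSVParser.parse(row, CSVFormat.DEFAULT).getRecords();
            System.out.println("comma delimiter:     "
                    + commaParsed.get(0).size() + " field(s)");

            // Semicolon delimiter: the row splits into its four fields.
            List<CSVRecord> semicolonParsed =
                    CSVParser.parse(row, CSVFormat.DEFAULT.withDelimiter(';')).getRecords();
            System.out.println("semicolon delimiter: "
                    + semicolonParsed.get(0).size() + " field(s)");
        }
    }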
Re: How to change default field delimiter from COMMA to SEMICOLON
Hi,

That is one option, but I would like to know what the correct -d parameter should be on the command line (the test data set is TB-sized).

Regards,
Arthur

On 9 Oct, 2014, at 1:06 pm, [email protected] wrote:

> Maybe running a program to modify your CSV file to replace every SEMICOLON with a COMMA would be more convenient.
>
> From: [email protected]
> Date: 2014-10-09 11:26
> To: user
> CC: [email protected]
> Subject: How to change default field delimiter from COMMA to SEMICOLON
>
> My CSV file uses semicolon as field delimiter, I tried to use -d ; but failed.
>
> Q: How to change default field delimiter from COMMA to SEMICOLON in psql.py command line?
>
> Regards
> Arthur
SALT_BUCKETS
Hi,

I plan to use Phoenix to load a very large transaction detail table (about 1 TB). My HBase cluster has 5 nodes, and I plan to create a salted table for it. Two questions:

Q1) Can anyone suggest how to determine the proper value for SALT_BUCKETS?
Q2) In my case, if I set SALT_BUCKETS = 5 (the same as the number of HBase nodes), is that a proper setting?

Regards,
Arthur
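For illustration, a salted table is declared with the SALT_BUCKETS option on the CREATE TABLE statement, as in the hedged Java sketch below. The table name, columns, and the bucket count of 5 are placeholders that simply echo the question; the thread does not settle on a rule for choosing the value, and the connection URL reuses the quorum from the earlier messages.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SaltedTableSketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:phoenix:z1,z2,z3:2181:/hbase"; // quorum from the thread
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                // Hypothetical schema; SALT_BUCKETS pre-splits the table into
                // the given number of salted regions (here 5, as in the question).
                stmt.execute("CREATE TABLE IF NOT EXISTS TXN_DETAIL ("
                        + " TXN_ID VARCHAR NOT NULL,"
                        + " TXN_DATE DATE,"
                        + " AMOUNT DECIMAL(12,2),"
                        + " CONSTRAINT pk PRIMARY KEY (TXN_ID))"
                        + " SALT_BUCKETS = 5");
                System.out.println("Created salted table TXN_DETAIL");
            }
        }
    }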
Re: How to change default field delimiter from COMMA to SEMICOLON
Hi,

Thank you so much!

Arthur

On 9 Oct, 2014, at 3:45 pm, [email protected] wrote:

> Hi Arthur,
>
> I tested a CSV load through psql and found that the following delimiter works. Possibly SEMICOLON needs an ESCAPE character:
>
> ./psql.py z1:/hbase -t NATION ../sample/NATION.csv -d "\;"
>
> Sun
>
> From: Gabriel Reid
> Date: 2014-10-09 15:17
> To: user
> Subject: Re: How to change default field delimiter from COMMA to SEMICOLON
>
> Hi,
>
> You've got the usage of the command correct there, but the semi-colon character has a special meaning in most shells. Wrapping it with single quotes should resolve the issue, as follows:
>
> ./psql.py z1:/hbase -t NATION ../sample/NATION.csv -d ';'
>
> - Gabriel
>
> On Thu, Oct 9, 2014 at 5:26 AM, [email protected] wrote:
> > My CSV file uses semicolon as field delimiter, I tried to use -d ; but failed.
> >
> > Q: How to change default field delimiter from COMMA to SEMICOLON in psql.py command line?
> >
> > Regards
> > Arthur
