https://issues.cloudera.org/browse/IMPALA-4421

On Wed, Nov 2, 2016 at 6:46 AM, Laszlo Gaal <[email protected]> wrote:
> Hi Amos,
>
> I've encountered this malformed directory name as well. The quick workaround
> is to delete it and create the subdir with the correct name:
> 1. rm -rf  ${IMPALA_HOME}/tests/*RESUL*
> 2. mkdir ${IMPALA_HOME}/tests/results
>
> This will create the correct subdirectory under tests.
>
> Hope this helps,
>
>     - LaszloG
>
> On Wed, Nov 2, 2016 at 2:02 PM, Amos Bird <[email protected]> wrote:
>
>>
>> Ah, re-login does the trick. Thanks for your help ;).
>>
>> However, the e2e tests throw a lot of errors.
>>
>> 1) the name of the directory containing the error log is strange. It
>> literally looks like this:
>> tests/"${RESULTS_DIR}/TEST-impala-custom-cluster.log"
>>
>> 2) the commit I tested is 7fc31b534d4c5cb118c559e16556a6c1ae6ca7fc
>>
>> 3) when executing tests/run-tests.py, it gave:
>> -----
>> Traceback (most recent call last):
>>   File "./tests/run-tests.py", line 94, in <module>
>>     test_executor.run_tests(args)
>>   File "./tests/run-tests.py", line 63, in run_tests
>>     exit_code = pytest.main(args)
>>   File "/home/amos/impala/infra/python/env/local/lib/python2.
>> 7/site-packages/_pytest/config.py", line 32, in main
>>     config = _prepareconfig(args, plugins)
>>   File "/home/amos/impala/infra/python/env/local/lib/python2.
>> 7/site-packages/_pytest/config.py", line 78, in _prepareconfig
>>     args = shlex.split(args)
>>   File "/usr/lib/python2.7/shlex.py", line 279, in split
>>     return list(lex)
>>   File "/usr/lib/python2.7/shlex.py", line 269, in next
>>     token = self.get_token()
>>   File "/usr/lib/python2.7/shlex.py", line 96, in get_token
>>     raw = self.read_token()
>>   File "/usr/lib/python2.7/shlex.py", line 172, in read_token
>>     raise ValueError, "No closing quotation"
>> ValueError: No closing quotation
>> -----
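>>
>> For what it's worth, that ValueError looks like shlex choking on an
>> unbalanced double quote in the pytest argument string, presumably the same
>> unexpanded ${RESULTS_DIR} quoting problem as in 1). A minimal sketch, with a
>> deliberately unclosed quote in a made-up argument:
>>
>>   import shlex
>>   # An unpaired double quote anywhere in the string makes shlex.split()
>>   # raise, exactly as in the traceback above.
>>   shlex.split('tests/"${RESULTS_DIR}/TEST-impala-custom-cluster.log')
>>   # ValueError: No closing quotation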
>>
>> 4) when executing "MAX_PYTEST_FAILURES=12345678 ./bin/run-all-tests.sh",
>> the BE and FE tests pass, but the e2e tests fail a lot. Log files are attached.
>>
>> I'm referring to this guide:
>> https://cwiki.apache.org/confluence/display/IMPALA/How+to+load+and+run+Impala+tests
>>
>> regards,
>> Amos
>>
>>
>>
>> Lars Volker writes:
>>
>> > Yes, this is already committed to the impala-setup repo and I used it
>> > yesterday on a fresh Ubuntu 14.04 machine with success.
>> >
>> > Amos, after running impala-setup you will need to re-login to make sure the
>> > changes made to the system limits are effective. You can check them by
>> > running "ulimit -n" in your shell.
>> >
>> > On Wed, Nov 2, 2016 at 5:48 AM, Jim Apple <[email protected]> wrote:
>> >
>> >> Isn't that already part of the script?
>> >>
>> >> https://github.com/awleblang/impala-setup/commit/56fa829c99e997585eb63fcd49cb65eb8357e679
>> >>
>> >> https://git-wip-us.apache.org/repos/asf?p=incubator-impala.git;a=blob;f=bin/bootstrap_development.sh;h=8c4f742ae058f8017858d2a749e8824be58bd410;hb=HEAD#l68
>> >>
>> >> On Tue, Nov 1, 2016 at 9:44 PM, Dimitris Tsirogiannis
>> >> <[email protected]> wrote:
>> >> > Hi Amos,
>> >> >
>> >> > You need to increase your limits (/etc/security/limits.conf) for max
>> >> > number of open files (nofile). Use a pretty big number (e.g. 500K) for
>> >> > both soft and hard.
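>> >> >
>> >> > A sketch of what that could look like in /etc/security/limits.conf
>> >> > (substitute your own login for "amos"; the 500000 is just the 500K
>> >> > figure above):
>> >> >
>> >> >   amos  soft  nofile  500000
>> >> >   amos  hard  nofile  500000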
>> >> >
>> >> > Hope that helps.
>> >> >
>> >> > Dimitris
>> >> >
>> >> > On Tue, Nov 1, 2016 at 8:57 PM, Amos Bird <[email protected]> wrote:
>> >> >>
>> >> >> Hi there,
>> >> >>
>> >> >> After days of effort to make Impala's local tests work on my CentOS
>> >> >> machine, I finally gave up and turned to Ubuntu. I followed this simple
>> >> >> guide,
>> >> >> https://cwiki.apache.org/confluence/display/IMPALA/Bootstrapping+an+Impala+Development+Environment+From+Scratch
>> >> >> on a freshly installed Ubuntu 14.04. Unfortunately there are still errors
>> >> >> in the data loading phase. Here is the error log:
>> >> >>
>> >> >> ---------------------------------------------------------------------------------------------
>> >> >> Loading Kudu TPCH (logging to /home/amos/impala/logs/data_loading/load-kudu-tpch.log)... FAILED
>> >> >> 'load-data tpch core kudu/none/none force' failed. Tail of log:
>> >> >> distribute by hash (c_custkey) into 9 buckets stored as kudu
>> >> >>
>> >> >> (load-tpch-core-impala-generated-kudu-none-none.sql):
>> >> >>
>> >> >>
>> >> >> Executing HBase Command: hbase shell load-tpch-core-hbase-generated.create
>> >> >> 16/11/02 01:07:58 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>> >> >> SLF4J: Class path contains multiple SLF4J bindings.
>> >> >> SLF4J: Found binding in [jar:file:/home/amos/impala/toolchain/cdh_components/hbase-1.2.0-cdh5.10.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >> SLF4J: Found binding in [jar:file:/home/amos/impala/toolchain/cdh_components/hadoop-2.6.0-cdh5.10.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> >> >> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> >> >> Executing HBase Command: hbase shell post-load-tpch-core-hbase-generated.sql
>> >> >> 16/11/02 01:08:03 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>> >> >> SLF4J: Class path contains multiple SLF4J bindings.
>> >> >> SLF4J: Found binding in [jar:file:/home/amos/impala/toolchain/cdh_components/hbase-1.2.0-cdh5.10.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >> SLF4J: Found binding in [jar:file:/home/amos/impala/toolchain/cdh_components/hadoop-2.6.0-cdh5.10.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> >> >> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> >> >> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> >> >> Invalidating Metadata
>> >> >> (load-tpch-core-impala-load-generated-kudu-none-none.sql):
>> >> >> INSERT INTO TABLE tpch_kudu.lineitem SELECT * FROM tpch.lineitem
>> >> >>
>> >> >> Data Loading from Impala failed with error: ImpalaBeeswaxException:
>> >> >>  Query aborted:
>> >> >> Kudu error(s) reported, first error: Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104)
>> >> >>
>> >> >>
>> >> >>
>> >> >> Kudu error(s) reported, first error: Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104)
>> >> >> Error in Kudu table 'impala::tpch_kudu.lineitem': Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104) (1 of 2708 similar)
>> >> >>
>> >> >> Traceback (most recent call last):
>> >> >>   File "/home/amos/impala/bin/load-data.py", line 158, in
>> >> exec_impala_query_from_file
>> >> >>     result = impala_client.execute(query)
>> >> >>   File "/home/amos/impala/tests/beeswax/impala_beeswax.py", line
>> 173,
>> >> in execute
>> >> >>     handle = self.__execute_query(query_string.strip(), user=user)
>> >> >>   File "/home/amos/impala/tests/beeswax/impala_beeswax.py", line
>> 339,
>> >> in __execute_query
>> >> >>     self.wait_for_completion(handle)
>> >> >>   File "/home/amos/impala/tests/beeswax/impala_beeswax.py", line
>> 359,
>> >> in wait_for_completion
>> >> >>     raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
>> >> >> ImpalaBeeswaxException: ImpalaBeeswaxException:
>> >> >>  Query aborted:
>> >> >> Kudu error(s) reported, first error: Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104)
>> >> >>
>> >> >>
>> >> >>
>> >> >> Kudu error(s) reported, first error: Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104)
>> >> >> Error in Kudu table 'impala::tpch_kudu.lineitem': Timed out: Failed to write batch of 2708 ops to tablet 84aa134fb6c24916aa16cf50f48ec557 after 329 attempt(s): Failed to write to server: (no server available): Write(tablet: 84aa134fb6c24916aa16cf50f48ec557, num_ops: 2708, num_attempts: 329) passed its deadline: Network error: recv error: Connection reset by peer (error 104) (1 of 2708 similar)
>> >> >>
>> >> >> Error in /home/amos/impala/testdata/bin/create-load-data.sh at line 45: while [ -n "$*" ]
>> >> >> + cleanup
>> >> >> + rm -rf /tmp/tmp.hMzGwIcUo3
>> >> >> ---------------------------------------------------------------------------------------------
>> >> >>
>> >> >> This is blocking the rebase of my patch. Any help is much appreciated!
>> >> >>
>> >> >> regards,
>> >> >> Amos
>> >>
>>
>>
