Hi, this issue sometimes happens when HDFS runs into problems under highly
concurrent file writes.

So to avoid this, please follow the best practices below:

1) Please configure your HDFS parameters correctly:
http://hdb.docs.pivotal.io/20/install/install-cli.html

2) Do not run highly concurrent loads into partitioned tables, especially
loading all partitions of a table simultaneously. A partitioned table load
opens many HDFS files at the same time, which can overload HDFS.

3) Use resource queues to limit the concurrency of load jobs, or load
partitioned tables partition by partition (see the sketch after this list).

4) Make sure your HDFS cluster is healthy: keep at least 3 replicas and
avoid having too many DataNodes down at the same time in the cluster.
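
For point 3, a minimal sketch. The names load_queue, etl_user, sales,
stage_sales, and the date range are only illustrative, and the exact
resource queue options may differ across HAWQ versions, so check the
documentation for your release:

    -- Limit how many load statements can run at once by putting the
    -- loading role on a small resource queue.
    CREATE RESOURCE QUEUE load_queue WITH (ACTIVE_STATEMENTS=2);
    ALTER ROLE etl_user RESOURCE QUEUE load_queue;

    -- Or load a partitioned table one partition's range at a time,
    -- instead of one big INSERT that touches every partition.
    INSERT INTO sales
    SELECT * FROM stage_sales
    WHERE sale_date >= '2016-01-01' AND sale_date < '2016-02-01';

Either way, the number of HDFS files open for append at any one time stays
much lower.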

Thanks
Lei


On Thu, Apr 14, 2016 at 3:56 PM, 来熊 <yin....@163.com> wrote:

> hi, all:
>  I am using HAWQ 1.3.1. I ran into a problem these days:
>
> When I load data into the table, I get an error like this:
> append only storage write could not open segment file
> hdfs://test-1:8020/gpsql/gpseg7/16385/16561/16794.129 for relation 'caca'
> device or resource busy
> (cdbappendonlystoragewrite.c 397)  ...(cdbdisp.c 1574)
> DETAIL:
> Failed to APPEND_FILE /gpsql/gpseg7/1685/16561/16794.129 for
> libhdfs3_client_random_130556199_count_1_tid_140xxxxxxx on test-2
> because another recovery is in progress by HDFS_NameNode on test-2.
>
> When I query data from the table: select * from caca limit 1;
> I got another error:
> read beyond eof in table caca (cdbbufferedread.c 201) ... (cdbdisp.c
> 1574)..
>
> Has anyone met this? Please help.
>
