Hi,
I want to build fuse-dfs from trunk, my command:
mvn package -DskipTests -Pnative,dist -Dtar
I've installed fuse-2.9.3 before the build:
./configure; make; make install; modprobe fuse
cp -a /usr/local/lib/libfuse.* /usr/lib64
but Maven complains:
main:
[exec] -- checking
-- Forwarded message --
From: hwpstorage hwpstor...@gmail.com
Date: Mar 7, 2014 11:38 PM
Subject: problem with HDFS caching in Hadoop 2.3
To: u...@hadoop.apache.org
Cc:
Hello,
It looks like the HDFS caching does not work well.
The cached log file is around 200MB. The
Hi Arun,
I'd suggest removing some sub-tasks from the blockers and adding the umbrella
tasks instead, such as:
RM HA - YARN-149
YARN Generic Application Timeline - YARN-1530
Generic application history service - YARN-321
Rolling upgrade (will be ready; no blockers currently, I just mention it) -
HDFS-5535
Hi,
Sorry, please ignore my input.
Just keep the sub-tasks as blockers, because the umbrella tasks as a whole may
not block the release.
On Fri, Mar 7, 2014 at 9:44 AM, Azuryy Yu azury...@gmail.com wrote:
Hi Arun,
I'd suggest removing some sub-tasks from the blockers and adding the umbrella
tasks instead, such as:
RM HA
+1 for merging.
On Feb 26, 2014 5:42 AM, Tsz Wo Sze szets...@yahoo.com wrote:
Hi hdfs-dev,
We propose merging the HDFS-5535 branch to trunk.
HDFS Rolling Upgrade is a feature to allow upgrading individual HDFS
daemons. In Hadoop v2, HDFS supports highly-available (HA) namenode
services
+1
It's very useful. Thanks for improving the ACL features.
On Tue, Feb 11, 2014 at 8:46 AM, Chris Nauroth cnaur...@hortonworks.com wrote:
Hello everyone,
I would like to call a vote to merge HDFS ACLs from branch HDFS-4685 to
trunk.
HDFS ACLs provide support for finer-grained permissions on
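For reference, the finer-grained permissions described in this vote are managed with the FsShell ACL commands (a sketch, assuming the command names documented for Hadoop 2.4+; the user `bob` and the path are hypothetical):

```shell
# Add a named-user ACL entry granting read/execute to user 'bob'
hdfs dfs -setfacl -m user:bob:r-x /data/reports

# Display the ACL now attached to the directory
hdfs dfs -getfacl /data/reports
```

These commands need a running HDFS cluster with dfs.namenode.acls.enabled=true, so they are shown as an illustration rather than a runnable test.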
Hi Todd,
I think Arpit's test method is incorrect. We cannot block port 8020 to
simulate the active NN going down, because the ZK session stays alive and the
NN process keeps running at the same time.
So when port 8020 is unblocked, NN1 still thinks it is active.
On Sat, Feb 8, 2014 at 3:47 AM, Todd Lipcon
+1
This is a very important improvement.
I hope it is released in 2.4.
On Jan 31, 2014 6:37 AM, Haohui Mai h...@hortonworks.com wrote:
Hello all,
I would like to call a vote to merge of the new protobuf-based FSImage into
trunk.
The changes introduce a protobuf-based FSImage format into HDFS. It
Hi,
Merry Christmas firstly.
Do we have a release milestone for Hadoop 2.4? And importantly, will the
following features be included?
a. Rolling upgrade for HDFS
b. Heterogeneous storage API support
c. The whole central cache feature
d. Serializing the fsImage using protobuf
Thanks a lot.
questions, I'd like to extend to you the
opportunity to help the HDFS dev team implement rolling upgrades
https://issues.apache.org/jira/browse/HDFS-5535
design reviews, code and tests welcome
-steve
On 21 November 2013 06:27, Azuryy Yu azury...@gmail.com wrote:
No. I don't do any
://wiki.apache.org/hadoop/Hadoop_Upgrade
https://twiki.grid.iu.edu/bin/view/Storage/HadoopUpgrade
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html#Upgrading_from_older_release_to_0.23_and_configuring_federation
From: Azuryy Yu azury...@gmail.com
Reply-To: u
Hi Dear,
I have a small test cluster with hadoop-2.0x and HA configured, but I
want to upgrade to hadoop-2.2.
I don't want to stop the cluster during the upgrade, so my steps are:
1) on the standby NN: hadoop-daemon.sh stop namenode
2) remove the HA configuration from the conf
3) hadoop-daemon.sh start
time.
-Original Message-
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: 21 November 2013 09:49
To: hdfs-dev@hadoop.apache.org; u...@hadoop.apache.org
Subject: HDFS upgrade problem of fsImage
Hi Dear,
I have a small test cluster with hadoop-2.0x and HA configured, but I
want
Hi Milind,
HDFS federation can solve the NN bottleneck and memory limit problems.
The AbstractNameSystem design sounds good, but distributed metadata storage
backed by HBase would likely bring performance degradation.
On Oct 4, 2013 3:18 AM, Milind Bhandarkar mbhandar...@gopivotal.com
wrote:
Hi All,
Exec
Hi,
this is not an HDFS issue.
You can post your question on the Flume mailing list.
On Jul 19, 2013 3:20 PM, Divya R avyakr...@gmail.com wrote:
I'm running hadoop 1.2.0 and flume 1.3. Everything works fine when each is
run independently. When I start my Tomcat I get the exception below after
some time.
other pointers would be helpful.
Thank you,
Eitan
On Mon, Jul 8, 2013 at 2:15 PM, Allan wilsoncr...@gmail.com wrote:
If the imbalance is across data nodes then you need to run the balancer.
Sent from my iPad
On Jul 8, 2013, at 1:15 AM, Azuryy Yu azury...@gmail.com wrote:
Hi
Hi Dear all,
There are some unbalanced data nodes in my cluster; some nodes have reached
more than 95% disk usage.
So can I move some block data from one node to another node directly?
For example, from n1 to n2:
1) scp /data//blk_* n2:/data/subdir11/
2) rm -rf data//blk_*
3) hadoop-daemon.sh
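Rather than moving block files by hand as sketched above, the supported fix for this kind of imbalance is the balancer (a hedged sketch; on the hadoop-1.x line the tool is invoked through the `hadoop` script, and the threshold is a percentage deviation from the cluster-average usage):

```shell
# Move blocks until every DataNode is within 5 percentage points
# of the cluster-wide average disk usage
hadoop balancer -threshold 5
```

Copying blk_* files with scp bypasses the DataNode's block map and the NameNode's replica accounting, so the hand-moved replicas would be treated as unknown or corrupt.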
That's detailed. Thanks ATM.
On Tue, Jul 2, 2013 at 7:02 AM, Aaron T. Myers a...@cloudera.com wrote:
Hi Azuryy,
On Wed, Jun 26, 2013 at 6:12 PM, Azuryy Yu azury...@gmail.com wrote:
Hi Dear all,
I have some confusion about edit log retention,
NNStorageRetentionManager.java:
1
Hi,
for your first question: if you deploy QJM-based HA, you don't need shared
highly-reliable sophisticated storage.
--Send from my Sony mobile.
On Jun 30, 2013 11:38 AM, Yonghwan Kim (JIRA) j...@apache.org wrote:
Yonghwan Kim created HDFS-4945:
--
Hi Dear all,
I have some confusion about edit log retention,
NNStorageRetentionManager.java:
1)
purgeCheckpointsOlderThan()
What is meant by 'checkpoint' here?
2) purgeOldStorage()
I cannot understand the calculation of the minimum txid; I think I could
understand it if I knew what these keys indicate.
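For what it's worth, the retention policy implemented by NNStorageRetentionManager is driven by two hdfs-site.xml keys (a sketch; property names as in the Hadoop 2.x defaults):

```
<property>
  <!-- Number of fsimage checkpoints to keep; older images are purged -->
  <name>dfs.namenode.num.checkpoints.retained</name>
  <value>2</value>
</property>
<property>
  <!-- Keep at least this many extra edit-log transactions beyond
       what the retained checkpoints require -->
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>1000000</value>
</property>
```

Roughly, the minimum txid computed in purgeOldStorage() is the txid of the oldest retained checkpoint minus the extra-edits allowance; edit segments entirely below it are purged.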
Hi dear All,
The txid is currently of type long,
FSImage.java:
boolean loadFSImage(FSNamesystem target, MetaRecoveryContext recovery)
throws IOException{
editLog.setNextTxId(lastAppliedTxId + 1L);
}
Is it possible that (lastAppliedTxId + 1L) exceed Long.MAX_VALUE ?
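As a back-of-the-envelope answer (my own arithmetic, not from the thread): a signed 64-bit txid cannot realistically overflow. Even at a sustained one million transactions per second:

```shell
# Years until a signed 64-bit txid (Long.MAX_VALUE = 9223372036854775807)
# is exhausted at 1,000,000 transactions per second
echo $(( 9223372036854775807 / (1000000 * 60 * 60 * 24 * 365) ))   # → 292471
```

So there are roughly 292,000 years of headroom, which is presumably why the code does not guard against it.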
are preventing reaching this number in the first place -
please do correct me if there is such a part).
On Tue, Jun 25, 2013 at 3:09 PM, Azuryy Yu azury...@gmail.com wrote:
Hi dear All,
The txid is currently of type long,
FSImage.java:
boolean loadFSImage(FSNamesystem target
FsShell was rewritten in hadoop-2. The new FsShell has many issues, such
as copyToLocal not preserving permissions, getmerge not supporting a
directory target, etc.
--Send from my Sony mobile.
On Jun 12, 2013 12:21 AM, Cristina L. Abad (JIRA) j...@apache.org wrote:
[
Actually, 2.1.0-alpha is 2.0.5-beta.
--Send from my Sony mobile.
On Jun 1, 2013 8:14 AM, Ted Yu yuzhih...@gmail.com wrote:
I am currently testing HBase 0.95 using 2.0.5-SNAPSHOT artifacts.
Would 2.1.0-SNAPSHOT maven artifacts be available after tomorrow's change ?
Thanks
On Fri, May 31,
good suggestion.
On Wed, Apr 17, 2013 at 4:10 PM, Harsh J ha...@cloudera.com wrote:
Pardon my late inquisition here but since HBase already shipped out
with a name .snapshots/, why do we force them to change it, and not
rename HDFS' snapshots to use .hdfs-snapshots, given that HDFS
This was caused by HDFS-4666https://issues.apache.org/jira/browse/HDFS-4666,
committed yesterday.
IMO, it will end up in a good place if HDFS and HBase both change their
naming conventions.
.hdfs_snapshot and .hbase_snapshot would be better.
On Tue, Apr 16, 2013 at 10:13 AM, Ted Yu
I think .hbase-snapshot is good, but we should also disallow users from
creating .hbase-snapshot under hbase.root subdirectories.
On Tue, Apr 16, 2013 at 11:18 AM, Ted Yu yuzhih...@gmail.com wrote:
I plan to rename .snapshot in HBase to .hbase-snapshot
Please suggest a better name going forward
/ShortCircuitLocalReads.apt.vm
best,
Colin
On Thu, Apr 11, 2013 at 6:37 PM, Azuryy Yu azury...@gmail.com wrote:
It's good to know HDFS-347 finally won the vote.
Is any additional configuration needed to enable these features?
On Fri, Apr 12, 2013 at 2:05 AM, Colin McCabe cmcc
Hadoop 2.0.x supports dfs.block.replicator.classname, so you can implement
your own replica placement policy.
On Thu, Apr 11, 2013 at 2:07 PM, Mohammad Mustaqeem
3m.mustaq...@gmail.comwrote:
I want to make changes in replica placement.
Which Hadoop version is preferable to make changes in HDFS??
Please
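The property mentioned above is wired in through hdfs-site.xml (a sketch; com.example.MyPlacementPolicy is a hypothetical class name, and the class must be on the NameNode's classpath):

```
<property>
  <name>dfs.block.replicator.classname</name>
  <!-- Hypothetical custom placement-policy class -->
  <value>com.example.MyPlacementPolicy</value>
</property>
```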
Hadoop v2 supports dfs.block.replicator.classname, so you can implement your
own replica placement policy.
--Send from my Sony mobile.
On Apr 11, 2013 2:07 PM, Mohammad Mustaqeem 3m.mustaq...@gmail.com
wrote:
I want to make changes in replica placement.
Which Hadoop version is preferable to make changes in
It's good to know HDFS-347 finally won the vote.
Is any additional configuration needed to enable these features?
On Fri, Apr 12, 2013 at 2:05 AM, Colin McCabe cmcc...@alumni.cmu.eduwrote:
The merge vote is now closed. With three +1s, it passes.
thanks,
Colin
On Wed, Apr 10,
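To answer the configuration question in this thread: yes, HDFS-347 short-circuit reads are disabled by default and are switched on in hdfs-site.xml (a sketch, assuming the property names documented for Hadoop 2.x; the socket path is an example):

```
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- UNIX domain socket shared by the DataNode and co-located clients -->
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```

The same socket path must be configured on both the DataNode and the client side.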
Hi,
HDFS-4339 should be included; otherwise, HDFS cannot fail over automatically.
On Thu, Apr 11, 2013 at 12:11 AM, Sangjin Lee sj...@apache.org wrote:
Hi Arun,
Would it be possible to include HADOOP-9407 in the release? It's been
resolved for a while, and it'd be good to have this
Sorry, don't include HDFS-4339, because
HDFS-4334https://issues.apache.org/jira/browse/HDFS-4334 is
fixed for 3.0.0.
On Thu, Apr 11, 2013 at 9:20 AM, Azuryy Yu azury...@gmail.com wrote:
Hi,
HDFS-4339 should be included; otherwise, HDFS cannot fail over
automatically.
On Thu, Apr 11, 2013
ns = namespace, ds = disk space. The ns quota limits the number of names
(files and directories) under the directory; the ds quota limits total file
size.
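Both quotas are set and inspected from the command line (a sketch; /user/alice is a hypothetical path, and note that the space quota counts raw bytes after replication):

```shell
# Namespace quota: at most 10000 names (files + directories) under the tree
hadoop dfsadmin -setQuota 10000 /user/alice

# Disk-space quota: at most 1 TB of raw (replicated) bytes
hadoop dfsadmin -setSpaceQuota 1t /user/alice

# Show quota and remaining quota for the directory
hadoop fs -count -q /user/alice
```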
On Apr 4, 2013 3:12 PM, Bert Yuan bert.y...@gmail.com wrote:
Below is the JSON format of a namenode entry:
{
inode:{
inodepath:'/anotherDir/biggerfile',
replication:3,