You could still use Harsh's solution programmatically, or maybe an
easier way is to use HAUtil.getAddressOfActive() [1] for that? Ideally
we should not need to query ZK directly.
[1]
01182348739=o02nv4on4FXbhlijJ+R/KXvhooQ=";
> Path=/; Expires=Thu, 27-Jul-2017 19:05:48 GMT; HttpOnly
> Transfer-Encoding: chunked
>
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFo
Hi Cinyoung,
Concat has some restrictions, such as requiring that the source file's last
block size matches the configured dfs.block.size. If all the conditions are met,
a command along the lines of the example below should work (here we are
concatenating /user/root/file-2 into /user/root/file-1, assuming the default
NameNode HTTP port 50070):
curl -i -X POST "http://<NAMENODE_HOST>:50070/webhdfs/v1/user/root/file-1?op=CONCAT&sources=/user/root/file-2"
Hi Yizhou,
Yes, this might be causing the failovers. I've seen situations where the
download of a large fsimage from the SBNN, plus additional requests to the ANN,
led to longer disk latency, which caused any service RPC request that requires
the HDFS write lock to take longer to be processed. This can cause
Hi Aneela,
All methods from the DFS CLI are exposed in the KMS HTTP REST API. Your Java
application can then make HTTP requests to the KMS. Here is an example of the
HTTP request format for creating a key:
POST http://HOST:PORT/kms/v1/keys
Content-Type: application/json
{
  "name"        : "<key-name>",
  "cipher"      : "<cipher>",
  "length"      : <length>,
  "material"    : "<material>",
  "description" : "<description>"
}
Hi Simone,
You should make sure to include hadoop-rumen-2.6.0.jar on the classpath of the
NodeManagers, or include it on the classpath of your job.
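For example, a rough sketch of the second option (the jar path below is a
placeholder), exporting the jar on the client classpath and then checking what
ends up on it with the hadoop classpath command:

export HADOOP_CLASSPATH=/path/to/hadoop-rumen-2.6.0.jar:$HADOOP_CLASSPATH
hadoop classpath   # prints the effective classpath, so you can confirm the jar is there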
> On 14 Dec 2015, at 09:56, siscia wrote:
>
> Hello folks,
>
> I am trying to run a simulation with GridMix but
-2.6.1.jar stax-api-1.0-2.jar
> hadoop-rumen-2.6.1.jar xmlenc-0.52.jar
> hadoop-sls-2.6.1.jar xz-1.0.jar
> hadoop-streaming-2.6.1.jar zookeeper-3.4.6.jar
>
> Am I doing something wrong? How do I check the classpath of the
>
Hi, do you have the properties below in the core-site.xml file used by your HDFS?
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
Hello all,
We need to run several HTTPFS instances on our
If that doesn't work, you may need to define one entry for these properties
for each user running an HttpFS instance.
See below:
http://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/ServerSetup.html
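For example (just a sketch, assuming a hypothetical service user named httpfs1
running one of the instances), that would mean entries like:

<property>
  <name>hadoop.proxyuser.httpfs1.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs1.groups</name>
  <value>*</value>
</property>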
On 03/06/2015 12:40, Wellington Chevreuil wellington.chevre...@gmail.com
wrote:
Hi, do you
There might be some FATAL/ERROR/WARN or Exception messages in this log file
that can explain why the NN process is dying. Can you paste some of the last
lines from the log file?
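For example, something along these lines (the log path is a placeholder, use
the one from your installation):

grep -iE 'FATAL|ERROR|WARN|Exception' /path/to/logs/hadoop-<user>-namenode-<host>.log | tail -n 50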
On 27 Apr 2015, at 09:37, Susheel Kumar Gadalay skgada...@gmail.com wrote:
jps listing is not showing namenode daemon.
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
On Monday, April 27, 2015 2:46 PM, Wellington Chevreuil
wellington.chevre...@gmail.com wrote:
There might be some FATAL/ERROR/WARN or Exception messages
/ 43526162 (voicemail)
On Monday, April 27, 2015 4:16 PM, Wellington Chevreuil
wellington.chevre...@gmail.com wrote:
Hello Anand,
This error means the NN could not find its metadata directory. You probably
need to run the hadoop namenode -format command before trying to start HDFS
daemons.
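For reference, a rough sequence (assuming a default Hadoop 2.x layout, with the
sbin scripts on your PATH) would be:

hadoop namenode -format
start-dfs.sh
jps   # the NameNode process should now show up in this listing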
Hello Anand,
Per your original email, this would be:
/home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.out
Cheers.
On 27 Apr 2015, at 09:41, Anand Murali anand_vi...@yahoo.com wrote:
Susheel:
Since I am new to this, what log file should I look for in the log
You should have /etc/hosts properly configured on all your cluster nodes.
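For example, a sketch of what each node's /etc/hosts could contain (the
hostnames and addresses below are made up, use your own), kept identical on the
master and on every slave:

192.168.1.10   master.example.com    master
192.168.1.11   slave01.example.com   slave01
192.168.1.12   slave02.example.com   slave02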
On 5 Aug 2014, at 07:28, S.L simpleliving...@gmail.com wrote:
when you say /etc/hosts file, you mean only on the master or on both the
master and slaves?
On Tue, Aug 5, 2014 at 1:20 AM, Satyam Singh
This indicates some lib version conflict - UnsupportedOperationException:
setXIncludeAware is not supported on this JAXP implementation or earlier: class
gnu.xml.dom.JAXPFactory
That class is in the gnujaxp jar. This chart API probably brought in a different
version of this lib, from the version
Hmm, I'm not sure, but I think through the API you have to create each folder
level at a time. For instance, if your current path is /user/logger and you
want to create /user/logger/dev2/tmp2, you have to first do hdfs.create(new
Path("/user/logger/dev2")), then hdfs.create(new Path("/user/logger/dev2/tmp2")).
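A rough sketch of that level-by-level idea in Java (using FileSystem.mkdirs for
directories rather than create, which opens a file for writing; error handling
omitted):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsExample {
    public static void main(String[] args) throws IOException {
        // picks up core-site.xml/hdfs-site.xml from the classpath
        FileSystem hdfs = FileSystem.get(new Configuration());
        // create each directory level explicitly
        hdfs.mkdirs(new Path("/user/logger/dev2"));
        hdfs.mkdirs(new Path("/user/logger/dev2/tmp2"));
    }
}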
Can you make sure you still have enough HDFS space once you kill this DN? If
not, HDFS will automatically enter safemode once it detects there's no HDFS
space available. The error message in the logs should have some hints on this.
Cheers.
On 28 Jul 2014, at 16:56, Satyam Singh
You should not face any data loss. The replicas were just moved away from that
node to other nodes in the cluster during decommission. Once you recommission
the node and re-balance your cluster, HDFS will redistribute replicas evenly
between the nodes, and the recommissioned node will receive its share of replicas.
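The re-balancing itself can be kicked off with the balancer tool, e.g. (the
threshold is a percentage, 10 is the default):

hdfs balancer -threshold 10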
Hi,
there's no way to do that, as HDFS does not provide file update features.
You'll need to write a new file with the changes.
Notice that even if you manage to find the physical block replica files on the
disk, corresponding to the part of the file you want to change, you can't
simply
Hi,
You should have proper core-site.xml, hdfs-site.xml and mapred-site.xml files on
your classpath. These files are normally available under /etc/hadoop/conf, so
that the hadoop jar command can load them.
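For example (a sketch; the job jar and main class below are placeholders):

export HADOOP_CONF_DIR=/etc/hadoop/conf
hadoop jar my-job.jar com.example.MyJob /input /output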
Thanks,
Wellington.
On 16 Jul 2014, at 06:25, harish tangella harish.tange...@gmail.com
Hi Viswanathan,
this looks like your job history is full, and it is filling up your JobTracker
heap:
2014-04-12 02:25:47,963 ERROR org.apache.hadoop.mapred.JobHistory: Unable to
move history file to DONE canonical subfolder.
java.lang.OutOfMemoryError: Java heap space
Have you set any value
Hi Victor,
if by replication you mean copying data from one cluster to another, you can
use the distcp command.
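For example (the NameNode addresses and paths below are just placeholders):

hadoop distcp hdfs://source-nn:8020/path/to/data hdfs://dest-nn:8020/path/to/data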
Cheers.
On 28 Mar 2014, at 16:30, Serge Blazhievsky hadoop...@gmail.com wrote:
You mean replication between two different hadoop cluster or you just need
data to be replicated between two
Hi Reena,
the pipeline is per block. If you have half of your file on data node A only,
that means the pipeline had only one node (node A in this case, probably
because the replication factor is set to 1), and so data node A has the
checksums for its block. The same applies to data node B.
Hi Kasa,
did you create the oozie user on the target ssh server, and does this user have
all the rights to execute what it should on the target server?
Regards,
Wellington.
2013/8/12 Kasa V Varun Tej kasava...@gmail.com
Folks,
I have been working on this oozie SSH action from past 2 days. I'm
Can't you use flume for that?
2013/4/19 David Parks davidpark...@yahoo.com
I just realized another trick you might try. The Hadoop dfs client can
read input from STDIN, you could use netcat to pipe the stuff across to
HDFS without hitting the hard drive, I haven’t tried it, but here’s
How about using a combiner to mark as dirty all rows from a dirty file, for
instance by putting a dirty flag as part of the key; then in the reducer you
can simply ignore these rows and/or output the bad file name.
It will still have to pass through the whole file, but at least it avoids the
case where you
Hi,
I think you'll have to implement your own custom FileInputFormat, using the
lib you mentioned to properly read your file records and split them across
map tasks.
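A skeleton of what that could look like (just a sketch; the class name is made
up, and LineRecordReader below is only a placeholder for a RecordReader built
on top of that lib):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class CustomRecordsInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
            TaskAttemptContext context) {
        // swap this for a RecordReader that parses your records with the lib you mentioned
        return new LineRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // return false if a single file's records cannot safely be split across map tasks
        return false;
    }
}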
Regards,
Wellington.
On 23/02/2013 14:14, Public Network Services
publicnetworkservi...@gmail.com wrote:
Hi...
I use an
Hi José,
I think your structure is ok for defining HBase row keys. The main issue
you'll have then is how you'll be able to build these keys, so that you can
properly access your tree nodes.
Regarding your scalability concerns, you should not worry about starting with a
small Hadoop/HBase cluster (even
Hi,
can you tell us how you are trying to format your hdfs? As it's a
NoClassDefFoundError, your hadoop lib is probably not on your
classpath.
Thanks,
Wellington.
2012/2/29 Marcos Ortiz mlor...@uci.cu:
On 03/01/2012 04:48 AM, raghavendhra rahul wrote:
Hi,
I tried to configure hadoop
Hi Harsh,
I had noticed that this ChainMapper belongs to the old version package
(org.apache.hadoop.mapred instead of org.apache.hadoop.mapreduce).
Although it takes generic Class types as its method argument, is this
class able to work with Mappers from the new version package
Intermediate data from the map phase is written to disk by the Mapper.
After that, the data will be sent to the Reducer(s), which involves 3
steps:
- shuffle: where the output data from the mappers is transferred to the
Reducer(s) as their input;
- sort: where the output data from the mappers is merged and grouped by key. This is
Hey Sadak,
you don't need to write an MR job for that. You can make your Java
program use the Hadoop Java API directly. You would need to use FileSystem
(http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html)
and Path
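Just to illustrate the kind of usage (a sketch; the path and the operation are
placeholders, since it depends on what your program needs to do - here it
simply reads a text file from HDFS):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws IOException {
        // loads core-site.xml/hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/sadak/input.txt"); // placeholder path
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(fs.open(file)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}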