Hi Alex,

No luck.  I added that, and I also added the webapp directory containing the war 
file to the path.  I tried adding both flumemaster.jsp and flumeagent.jsp to the 
end of the URL (http://localhost:35862/flumemaster.jsp).

Same 404.

Thanks,
Chalcy

-----Original Message-----
From: alo alt [mailto:wget.n...@googlemail.com] 
Sent: Tuesday, January 24, 2012 1:30 PM
To: flume-user@incubator.apache.org
Subject: Re: flume windows node - 404 on localhost

add flumemaster.jsp at the end.

- Alex

--
Alexander Lorenz
http://mapredit.blogspot.com

On Jan 24, 2012, at 7:16 PM, Chalcy Raja wrote:

> Hello Flume users,
>  
> I am new to flume.  I have successfully set up a master and a node on two 
> Linux virtual machines, and they are collecting logs as expected.
>  
> Now I am trying to set up a Windows Flume node. I followed the installation 
> guide, and I can successfully run the node as a service.  But when I go to 
> port 35862, I get a 404 like the one below.
>  
> I also tried to start the node from the Windows command line, and I get a 
> status of "start pending".
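> For reference, here is roughly how I am checking the service from an 
> Administrator command prompt ("flumenode" is just the name the installer 
> registered on my machine, so yours may differ):
>  
>   REM query the current state of the node service
>   sc query flumenode
>  
>   REM attempt to start the service
>   net start flumenode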
>  
> Any help is appreciated.
>  
> Thanks,
> Chalcy
>  
> <image001.png>
>  
>  
>  
> From: Arvind Prabhakar [mailto:arv...@apache.org]
> Sent: Friday, January 13, 2012 12:26 PM
> To: flume-user@incubator.apache.org
> Subject: Re: Flume NG reliability and failover mechanisms
>  
> Hi Connolly,
>  
> Thanks for taking time to evaluate Flume NG. Please see my comments inline 
> below:
> 
> On Thu, Jan 12, 2012 at 5:17 PM, Connolly Juhani 
> <juhani_conno...@cyberagent.co.jp> wrote:
> Hi,
> 
> Coming into the new year we've been trying out Flume NG and have run into 
> some questions. I tried to pick up what I could from the javadoc and source, 
> but pardon me if some of these are obvious.
> 
> 1) Reading 
> http://www.cloudera.com/blog/2011/12/apache-flume-architecture-of-flume-ng-2/ 
> describes the reliability model, but what happens if we lose a node?
> 1.1) Presumably the data stored in its channel is gone?
>  
> It depends upon the kind of channel you have. If you use a memory channel, 
> the data will be gone. If you use a file channel, the data will be available. 
> If you use the JDBC channel, it is guaranteed to be available.
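>  
> To make that concrete, the durability choice is just the channel's "type" in 
> the agent configuration. A minimal sketch (the agent and channel names are 
> placeholders, and depending on your build the short aliases may need to be 
> fully-qualified channel class names):
>  
>   # "memory" = fast but lost on restart; "file" = durable on local disk;
>   # "jdbc" = durable in an embedded database
>   agent1.channels = ch1
>   agent1.channels.ch1.type = jdbc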
>  
> 1.2) If we restart the node and the channel is a persistent one (file 
> or JDBC based), will it then happily start feeding data into the sink?
>  
> Correct.
>  
> 
> 2) Is there some way to deliver data along multiple paths but make 
> sure it only gets persisted to a sink once? This is to avoid loss of data 
> when a node dies.
>  
> We have talked about fail-over sink implementations. Although we don't have 
> them implemented yet, we do intend to provide these facilities.
>  
> 2.1) Will there be stuff equivalent to the E2E mode of OG?
>  
> If you mean end-to-end reliable delivery guarantee, Flume NG already provides 
> that. You can get this by configuring your flow with reliable channels (JDBC).
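>  
> As a rough sketch of what such a flow could look like (all names below are 
> placeholders, and the source/sink settings are assumptions about a typical 
> setup rather than anything specific to your install):
>  
>   # one source -> durable JDBC channel -> HDFS sink
>   agent1.sources  = avroSrc
>   agent1.channels = reliableCh
>   agent1.sinks    = hdfsSink
>  
>   agent1.sources.avroSrc.type = avro
>   agent1.sources.avroSrc.bind = 0.0.0.0
>   agent1.sources.avroSrc.port = 41414
>   agent1.sources.avroSrc.channels = reliableCh
>  
>   agent1.channels.reliableCh.type = jdbc
>  
>   agent1.sinks.hdfsSink.type = hdfs
>   agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
>   agent1.sinks.hdfsSink.channel = reliableCh
>  
> The point is that events leave the durable channel only when the sink's 
> transaction commits, which is what gives you the end-to-end guarantee.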
>  
> 2.2) Anything else planned, but further down the horizon? I didn't see much at 
> https://cwiki.apache.org/confluence/display/FLUME/Features+and+Use+Cases, 
> but that doesn't look very up to date.
>  
> Most of the discussion has now moved to JIRA and the dev list. Features such as 
> channel multiplexing from the same source, a compatible source implementation for 
> hybrid installations of the previous version of Flume and NG together, and event 
> prioritization have been discussed, among many others. As and when resources 
> permit, we will be addressing these going forward.
>  
> 
> 3) Using the HDFS sink, we're getting tons of really small files. I 
> suspect this is related to append; having had a poke around the 
> source, it turns out that append is only used if 
> hdfs.append.support is set to true. The hdfs-default.xml name for this 
> variable is dfs.support.append. Is this intentional? Should we be 
> adding hdfs.append.support manually to our config, or is there 
> something else going on here (regarding all the tiny files)?
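>  
> To be concrete, by "adding it manually" I mean something like this in the 
> Hadoop configuration the node picks up (e.g. hdfs-site.xml); this is just a 
> sketch of what the sink appears to check, not something we've confirmed is 
> the right fix:
>  
>   <!-- property name the sink currently reads, per the source dig above -->
>   <property>
>     <name>hdfs.append.support</name>
>     <value>true</value>
>   </property>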
>  
> (Leaving this for Prasad, who did the implementation of the HDFS sink.)
>  
> 
> Any help with these issues would be greatly appreciated.
>  
> Thanks,
> Arvind
>  

