This is an automated email from the ASF dual-hosted git repository.

oleewere pushed a commit to branch master
in repository

The following commit(s) were added to refs/heads/master by this push:
     new d0a46d8  MBARI-23456. Add cloud mode documentation (#66)
d0a46d8 is described below

commit d0a46d82304317c20c258355f7573a2442a025e3
Author: Olivér Sz <>
AuthorDate: Fri Jan 4 15:38:02 2019 +0100

    MBARI-23456. Add cloud mode documentation (#66)
 docs/ | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/docs/ b/docs/
index 731c6d2..f3fe5dd 100644
--- a/docs/
+++ b/docs/
@@ -15,6 +15,26 @@ See the License for the specific language governing permissions and
 limitations under the License.
-### Log Feeder: cloud mode
+### Log Feeder: Cloud Mode
\ No newline at end of file
+Log Feeder is responsible for shipping logs to an output. In the Log Search case, that output is a search engine (Solr), which the Log Search server can use to visualize the logs.
+Logs can also be sent to cloud storage / HDFS; in that case Solr is not used, so you won't be able to visualize the data. Log Feeder has 3 main modes:
+- `DEFAULT`: logs are shipped to Solr (or whatever is configured as output)
+- `CLOUD`: logs are shipped to cloud storage or HDFS (as text files or in JSON format)
+- `HYBRID`: logs are shipped to both Solr and cloud storage (or HDFS) in parallel
+The cloud mode can be set by the `` property in
+#### How it works?
+In cloud mode, instead of shipping the monitored logs directly to a specific location such as Solr, Log Feeder first writes all monitored logs into new files in a temporary folder. Those files are archived periodically (you can check the log rollover configuration options, the ones with `*` prefixes, in []()), and every minute a background thread tries to upload the archived logs to HDFS or cloud storage.
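The rollover-then-upload pattern described above can be sketched as follows. This is an illustration only, not the actual Log Feeder code: the folder name, file pattern, and interval are placeholders, and the real implementation hands the files to the HDFS client instead of printing them.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.*;

// Sketch: a background task that wakes up periodically and "uploads"
// (here: prints and deletes) any archived log files it finds.
public class ArchiveUploaderSketch {
    public static void main(String[] args) throws Exception {
        Path archiveDir = Files.createTempDirectory("logfeeder-archive");
        Files.createFile(archiveDir.resolve("audit.log.gz")); // pretend rollover happened

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Log Feeder runs this roughly once a minute; a short period is used here
        // so the sketch finishes quickly.
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(() -> {
            try (DirectoryStream<Path> archived =
                     Files.newDirectoryStream(archiveDir, "*.gz")) {
                for (Path file : archived) {
                    // Real code would pass the file to the HDFS client here,
                    // which may target s3a/wasb/etc. depending on configuration.
                    System.out.println("uploading " + file.getFileName());
                    Files.delete(file); // remove only after a successful upload
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, 0, 100, TimeUnit.MILLISECONDS);

        Thread.sleep(300);
        task.cancel(false);
        scheduler.shutdown();
    }
}
```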
+The HDFS client is responsible for the upload. You can provide a `core-site.xml` on the classpath (the HDFS client can be configured to use different filesystems such as s3a, wasb, gcs, etc.); if you do not have a `core-site.xml` on the classpath, you can provide the `fs.*` properties in the `` configuration.
+It is common on some clusters that both HDFS and cloud storage are used, so it is not always valid to use the filesystem that `core-site.xml` defines; that is why there is an option to override the filesystem value (`fs.defaultFS`): the `` property (e.g.:
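As a sketch of the classpath variant described above, a `core-site.xml` fragment pointing the HDFS client at S3 via the s3a connector could look like this (`fs.defaultFS` and the `fs.s3a.*` keys are standard Hadoop properties; the bucket name and credential values are placeholders):

```xml
<!-- core-site.xml fragment: route the HDFS client to S3 via s3a -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>s3a://mybucket</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>MY_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>MY_SECRET_KEY</value>
  </property>
</configuration>
```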
+Overall, as a minimum, it is enough to provide these two properties to enable cloud storage mode:
+- ``: CLOUD
+- ``: s3a://mybucket/apps/logsearch
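As an illustration, the minimal setup above could look like this as a properties-file fragment (the key names below are hypothetical placeholders, since the actual property names are truncated in this message; the values come from the list above):

```properties
# Hypothetical key names -- substitute the real Log Feeder property keys.
cloud.storage.mode=CLOUD
cloud.storage.base.path=s3a://mybucket/apps/logsearch
```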
