The README should document where in HDFS the data ends up, and it should also document how to inspect that data (-copyToLocal + -text for avro serialization, and -cat for text serialization). Also, as we discussed, split the data into separate directories per remote service (or perhaps even per unit), or at least into separate filenames, to make it clearer where the data came from.
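
For instance, the README could walk through something along these lines
(the paths and filenames below are illustrative only; the actual HDFS
directory depends on how the agent is configured):

    # List the events ingested for a given remote service (hypothetical path)
    hdfs dfs -ls /user/flume/syslog

    # Avro serialization: copy a file locally, or decode it in place with -text
    hdfs dfs -copyToLocal /user/flume/syslog/FlumeData.1438041234567 .
    hdfs dfs -text /user/flume/syslog/FlumeData.1438041234567

    # Text serialization: view the file contents directly
    hdfs dfs -cat /user/flume/syslog/FlumeData.1438041234567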
--
https://bugs.launchpad.net/bugs/1478770

Title: New charm: apache-flume-hdfs
Status in Juju Charms Collection: New

Bug description:
  This charm provides a Flume agent designed to ingest events into the
  shared filesystem (HDFS) of a connected Hadoop cluster. It is meant to
  relate to other Flume agents such as apache-flume-syslog and
  apache-flume-twitter. Big Data charmers need to review the current
  state of this charm and verify its readiness for the charm store.

