[
https://issues.apache.org/jira/browse/HDFS-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Nandakumar updated HDFS-12071:
------------------------------
Description:
A tool to populate Ozone with data for testing.
This is not a MapReduce program, and it is not intended for benchmarking Ozone
write throughput.
It supports both online and offline modes. The default mode is offline; {{-mode}}
can be used to change the mode.
In online mode, an active internet connection is required; Common Crawl data
from AWS will be used. The default source is [CC-MAIN-2017-17/warc.paths.gz|
https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2017-17/warc.paths.gz]
(it contains the paths to the actual data segments); the user can override this
using {{-source}}.
The following values are derived from the URL of the Common Crawl data:
* Domain will be used as the Volume
* URL will be used as the Bucket
* FileName will be used as the Key
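The mapping above can be sketched as follows. This is an illustrative sketch only, not the actual Corona code; the class and method names are assumptions, and the exact parsing rules are not specified in this issue.

```java
import java.net.URI;

// Hypothetical sketch of the Volume/Bucket/Key derivation described above:
// Domain -> Volume, full URL -> Bucket, FileName -> Key.
public class CrawlUrlMapper {
    /** Returns {volume, bucket, key} derived from a Common Crawl data URL. */
    public static String[] map(String url) {
        URI uri = URI.create(url);
        String volume = uri.getHost();                 // Domain is used as Volume
        String path = uri.getPath();
        String key = path.substring(path.lastIndexOf('/') + 1); // FileName as Key
        return new String[] { volume, url, key };      // full URL is used as Bucket
    }

    public static void main(String[] args) {
        // Example segment path (illustrative, not a real object from warc.paths.gz):
        String url = "https://commoncrawl.s3.amazonaws.com/crawl-data/"
                + "CC-MAIN-2017-17/example-00000.warc.gz";
        String[] vbk = map(url);
        System.out.println("Volume: " + vbk[0]);
        System.out.println("Bucket: " + vbk[1]);
        System.out.println("Key:    " + vbk[2]);
    }
}
```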
In offline mode, the data will be random bytes, and the size of each key will be 10 KB.
* Default number of Volumes is 10; {{-numOfVolumes}} can be used to override it
* Default number of Buckets per Volume is 1000; {{-numOfBuckets}} can be used to
override it
* Default number of Keys per Bucket is 500000; {{-numOfKeys}} can be used to
override it
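The offline-mode shape above can be sketched as a nested loop over volumes, buckets, and keys, with a random 10 KB value per key. This is a minimal illustration, not the actual tool: the names are assumptions, the counts are scaled down from the 10/1000/500000 defaults, and the real tool would write through the Ozone client rather than just counting.

```java
import java.util.Random;

// Sketch of offline-mode data generation: numOfVolumes x numOfBuckets x numOfKeys
// keys, each a random 10 KB value.
public class OfflineLoadSketch {
    static final int KEY_SIZE = 10 * 1024; // 10 KB per key (the stated default)

    /** Generates one random 10 KB value, as offline mode does per key. */
    public static byte[] randomValue(Random rnd) {
        byte[] data = new byte[KEY_SIZE];
        rnd.nextBytes(data);
        return data;
    }

    public static void main(String[] args) {
        // Tiny demo counts instead of the real defaults (10 / 1000 / 500000).
        int numOfVolumes = 2, numOfBuckets = 3, numOfKeys = 4;
        Random rnd = new Random();
        int written = 0;
        for (int v = 0; v < numOfVolumes; v++) {
            for (int b = 0; b < numOfBuckets; b++) {
                for (int k = 0; k < numOfKeys; k++) {
                    byte[] value = randomValue(rnd); // would be stored as volume/bucket/key
                    written += (value.length == KEY_SIZE) ? 1 : 0;
                }
            }
        }
        System.out.println("keys written: " + written); // 2 * 3 * 4 = 24
    }
}
```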
was:
A tool to populate Ozone with data for testing.
This is not a MapReduce program, and it is not intended for benchmarking Ozone
write throughput.
It supports both online and offline modes. The default mode is offline; {{-mode}}
can be used to change the mode.
In online mode, an active internet connection is required; Common Crawl data from
AWS will be used. The default source is
https://commoncrawl.s3.amazonaws.com/CC-MAIN-2017-17/warc.paths.gz (it contains
the paths to the actual data segments); the user can override this using {{-source}}.
The following values are derived from the URL of the Common Crawl data:
* Domain will be used as the Volume
* URL will be used as the Bucket
* FileName will be used as the Key
In offline mode, the data will be random bytes, and the size of each key will be 10 KB.
* Default number of Volumes is 10; {{-numOfVolumes}} can be used to override it
* Default number of Buckets per Volume is 1000; {{-numOfBuckets}} can be used to
override it
* Default number of Keys per Bucket is 500000; {{-numOfKeys}} can be used to
override it
> Ozone: Corona: Implementation of Corona
> ---------------------------------------
>
> Key: HDFS-12071
> URL: https://issues.apache.org/jira/browse/HDFS-12071
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Nandakumar
> Assignee: Nandakumar
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]