Interestingly enough, I think I can create a new wiki page, but I don't see
a button to edit existing wiki pages.
On Thu, Aug 15, 2019 at 6:24 AM Masatake Iwasaki
<iwasak...@oss.nttdata.co.jp> wrote:

> Hi Wei-Chiu Chuang,
>
> Thanks for doing this and sorry for the late reply.
>
> HowToCommit[1] wiki has a description about how to do this in the "Adding
> Contributors role" section but it does not explicitly say when it
> should be done.
Hi Wangda,

Thanks for bringing this up.

> I think it is the time to do maintenance releases of 3.1/3.2 and do a
> minor release for 3.3.0.

3.3.0 seems to have some blocker issues:
project in ("Hadoop Common", "Hadoop HDFS", "Hadoop Map/Reduce",
"Hadoop YARN") AND "Target Version/s" =
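The JQL query above is cut off after "Target Version/s" =. For anyone who wants to actually run such a search, a minimal sketch of building the issue-navigator URL is below. The target-version value (3.3.0) and the blocker/unresolved filters are assumptions added for illustration; they are not part of the original message.

```python
from urllib.parse import quote

# Hypothetical completion of the truncated JQL above: the "3.3.0" value
# and the priority/resolution clauses are assumptions for illustration.
jql = ('project in ("Hadoop Common", "Hadoop HDFS", "Hadoop Map/Reduce", '
       '"Hadoop YARN") AND "Target Version/s" = 3.3.0 '
       'AND priority = Blocker AND resolution = Unresolved')

# JIRA's issue navigator accepts the query URL-encoded in the ?jql= parameter.
url = "https://issues.apache.org/jira/issues/?jql=" + quote(jql)
print(url)
```

Pasting the printed URL into a browser (or the same JQL into the JIRA advanced-search box) lists the matching blockers.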
Hi Wei-Chiu Chuang,

Thanks for doing this and sorry for the late reply.

HowToCommit[1] wiki has a description about how to do this in the "Adding
Contributors role" section but it does not explicitly say when it
should be done.

[1] https://cwiki.apache.org/confluence/display/HADOOP2/HowToCommit
liying created HDFS-14738:
------------------------------
Summary: Reading and writing HDFS data through HDFS nfs3 server
often getting stuck
Key: HDFS-14738
URL: https://issues.apache.org/jira/browse/HDFS-14738
Project: Hadoop HDFS
xuzq created HDFS-14739:
------------------------------
Summary: RBF: LS command for mount point shows wrong owner and
permission information.
Key: HDFS-14739
URL: https://issues.apache.org/jira/browse/HDFS-14739
Project: Hadoop HDFS
[ https://issues.apache.org/jira/browse/HDDS-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xudong Cao resolved HDDS-1703.
------------------------------
Resolution: Invalid
> Freon uses wait/notify instead of polling to eliminate the test result errors.
>
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/414/
[Aug 14, 2019 3:35:14 AM] (iwasakims) HDFS-14423. Percent (%) and plus (+)
characters no longer work in
[Error replacing 'FILE' - Workspace is not accessible]
liying created HDFS-14737:
------------------------------
Summary: Writing data through the HDFS nfs3 service is very slow,
and timeout occurs while mount directories on the nfs client
Key: HDFS-14737
URL:
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/
[Aug 14, 2019 3:24:03 AM] (github) HADOOP-16495. Fix invalid metric types in
PrometheusMetricsSink (#1244)
[Aug 14, 2019 3:27:37 AM] (aengineer) HDDS-1920. Place ozone.om.address config
key default value
Bharat Viswanadham created HDDS-1974:
Summary: Implement OM CancelDelegationToken request to use Cache
and DoubleBuffer
Key: HDDS-1974
URL: https://issues.apache.org/jira/browse/HDDS-1974
Bharat Viswanadham created HDDS-1973:
Summary: Implement OM RenewDelegationToken request to use Cache
and DoubleBuffer
Key: HDDS-1973
URL: https://issues.apache.org/jira/browse/HDDS-1973
Project:
Siyao Meng created HDDS-1971:
Summary: Document Ozone fs shell command works without explicitly
specifying a default port
Key: HDDS-1971
URL: https://issues.apache.org/jira/browse/HDDS-1971
Project:
Bharat Viswanadham created HDDS-1972:
Summary: Provide example ha proxy with multiple s3 servers back
end.
Key: HDDS-1972
URL: https://issues.apache.org/jira/browse/HDDS-1972
Project: Hadoop
Feilong He created HDFS-14740:
------------------------------
Summary: HDFS read cache persistence support
Key: HDFS-14740
URL: https://issues.apache.org/jira/browse/HDFS-14740
Project: Hadoop HDFS
Issue Type: Improvement
Dear Submarine developers,

My name is Xun Liu, and I am a member of the Hadoop Submarine development
team. I have been one of the major contributors to Submarine since June 2018.
I want to hear your thoughts about creating a separate GitHub repo under
Apache for Submarine development. This is an