2]) for Google Cloud Storage (GCS) which we have been building and maintaining for some time. After we clean up our code and tests to conform (to these[3] and other requirements) we would like to contribute it to Hadoop. We have many customers using the connector in high-throughput production Hadoop clusters; we'd like to make it easier and faster to use Hadoop and GCS.

Timeline:
Presently, we are working on the beta of Google Cloud Dataproc[4] which limits our time a bit, so we're targeting late Q1 2016 for creating a JIRA issue and adapting our connector code as needed.

Our (quick) questions:
* Do we need to take any (non-coding) action for this beyond submitting a JIRA when we are ready?
* Are there any up-front concerns or questions which we can (or will need to) address?

Thank you!

James Malone
On behalf of the Google Big Data OSS Engineering Team

Links:
[1] - https://github.com/GoogleCloudPlatform/bigdata-interop/tree/master/gcs
[2] - https://cloud.google.com/hadoop/google-cloud-storage-connector
[3] - https://github.com/GoogleCloudPlatform/bigdata-interop/tree/master/gcs
[4] - https://cloud.google.com/dataproc
--
jay vyas
Also, if they are general Hadoop big data examples, we're happy to carry them in Bigtop as well... especially if they touch multiple areas of the Hadoop ecosystem.
On Jun 23, 2015, at 11:56 PM, Andrew Wang andrew.w...@cloudera.com wrote:
Yea, throw them under dev-support. It'd be good to link
One easy place to contribute in small increments could be reproducing bugs in JIRAs that are filed and open.
If every day you spent an hour reproducing a bug filed in a JIRA, you could eventually come up to speed on a lot of sharp corners of the source code, and probably contribute
Yup, that's a great summary. More details...
The HCFS wiki page will give you insight into some tests you can run to test your FileSystem plugin class, which you will put in a jar file described below.
In general, Hadoop apps are written to the FileSystem interface, which is loaded at runtime,
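For reference, the usual way such a plugin gets wired in is a core-site.xml entry mapping a URI scheme to the implementation class on the classpath. This is a hedged sketch: the `fs.myfs.impl` property name and the class are illustrative placeholders, not a real connector.

```xml
<!-- core-site.xml sketch: map the myfs:// scheme to a FileSystem
     implementation shipped in a jar on the Hadoop classpath.
     Property name and class below are placeholders. -->
<configuration>
  <property>
    <name>fs.myfs.impl</name>
    <value>com.example.hadoop.fs.MyFileSystem</value>
  </property>
</configuration>
```

With that in place, any path of the form myfs://bucket/dir resolves to the plugin class at runtime, which is why applications coded against the FileSystem interface need no recompilation.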
jay vyas created HADOOP-11251:
Summary: Confirm that all contract tests are run by RawLocalFS
Key: HADOOP-11251
URL: https://issues.apache.org/jira/browse/HADOOP-11251
Project: Hadoop Common
jay vyas created HADOOP-11072:
Summary: better Logging in DNS.java
Key: HADOOP-11072
URL: https://issues.apache.org/jira/browse/HADOOP-11072
Project: Hadoop Common
Issue Type: Improvement
These appear to be Java errors related to your JDK?
Maybe your JDK doesn't match up well with your OS.
Consider trying Red Hat 6+ or Fedora 20?
On Jul 8, 2014, at 5:45 AM, moses.wang (JIRA) j...@apache.org wrote:
moses.wang created HADOOP-10795:
jay vyas created HADOOP-10723:
Summary: FileSystem deprecated filesystem name warning : Make
error message HCFS compliant
Key: HADOOP-10723
URL: https://issues.apache.org/jira/browse/HADOOP-10723
I think breaking backwards compat is sensible since it's easily caught by the compiler, and in this case the alternative is a runtime error that can result in terabytes of mucked-up output.
On May 29, 2014, at 6:11 AM, Matt Fellows matt.fell...@bespokesoftware.com
wrote:
As someone
?
Thanks !
contains other jhist files (which *are* recognized)?
Also I've created a JIRA for finer-grained logging during the directoryScan(..) operation: https://issues.apache.org/jira/browse/MAPREDUCE-5902
On May 22, 2014, at 1:37 PM, Jay Vyas jayunit...@gmail.com wrote:
(sorry, i meant THROW a NPE
response.setCounters(TypeConverter.toYarn(job.getAllCounters()));
224 return response;
225 }
(sorry, I meant THROW a NPE, not return a null). Big difference of course!
On Thu, May 22, 2014 at 1:36 PM, Jay Vyas jayunit...@gmail.com wrote:
Hi Hadoop... Is there a reason why line 220, below, should ever return null when called through the code path of job.getCounters?
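To illustrate the distinction being argued for here, a hedged sketch (not the actual MapReduce code; `fetchCounters` merely stands in for something like `job.getAllCounters()`, which may be null mid-initialization): failing fast with an exception surfaces the problem at the call site, while returning null lets it propagate into downstream output.

```java
import java.util.Objects;

public class CountersGuard {
    // Hypothetical stand-in for a counters lookup that can legitimately
    // be null while a job is still initializing.
    static Object fetchCounters(boolean ready) {
        return ready ? new Object() : null;
    }

    // Fail fast with a clear message instead of returning null and letting
    // a NullPointerException surface far away from the real cause.
    static Object requireCounters(boolean ready) {
        return Objects.requireNonNull(fetchCounters(ready),
                "counters not available yet; job may still be initializing");
    }

    public static void main(String[] args) {
        System.out.println(requireCounters(true) != null);
        try {
            requireCounters(false);
        } catch (NullPointerException e) {
            System.out.println("threw: " + e.getMessage());
        }
    }
}
```

The point is not the exception type but the timing: a thrown error stops the job immediately, whereas a silently returned null can corrupt terabytes of output before anyone notices.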
Couple more questions:
- What is source vs. modules in Steve's above outline?
- Should individual JIRAs be submitted to start doing this for segments of the code, and if so, at what granularity?
will be created in the dir; distribute the krb5.conf and the keytab file to your clients, configure the clients to pick up the krb5.conf, and you are done.
thx
Alejandro
(phone typing)
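For reference, a minimal krb5.conf along the lines described above looks roughly like the sketch below; the realm and KDC host are placeholders for your environment.

```ini
; Minimal krb5.conf sketch; EXAMPLE.COM and kdc.example.com are placeholders.
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```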
On Apr 17, 2014, at 8:28, Jay Vyas jayunit...@gmail.com wrote:
Ah, that's nice to know. So... are there other
jay vyas created HADOOP-10505:
Summary: LinuxContainerExecutor is incompatible with Simple
Security mode.
Key: HADOOP-10505
URL: https://issues.apache.org/jira/browse/HADOOP-10505
Project: Hadoop Common
Slf4j is definitely a great step forward. Log4j is restrictive for complex and multi-tenant apps like Hadoop.
Also, the fact that slf4j doesn't use any magic when binding to its log provider makes it way easier to swap out its implementation than tools of the past.
On Apr 10, 2014, at 2:16 AM,
jay vyas created HADOOP-10464:
Summary: Make TestTrash compatible with HADOOP-10461 .
Key: HADOOP-10464
URL: https://issues.apache.org/jira/browse/HADOOP-10464
Project: Hadoop Common
Issue Type
jay vyas created HADOOP-10463:
Summary: Bring RawLocalFileSystem test coverage to 100%
Key: HADOOP-10463
URL: https://issues.apache.org/jira/browse/HADOOP-10463
Project: Hadoop Common
Issue
jay vyas created HADOOP-10461:
Summary: Runtime DI based injector for FileSystem tests
Key: HADOOP-10461
URL: https://issues.apache.org/jira/browse/HADOOP-10461
Project: Hadoop Common
Issue
jay vyas created HADOOP-10405:
Summary: CLOVER coverage analysis for Hadoop-Common tests
Key: HADOOP-10405
URL: https://issues.apache.org/jira/browse/HADOOP-10405
Project: Hadoop Common
Issue
And also, if you want to help out: we are developing blueprints in the Bigtop project specifically for people who want to learn how real-world big data workflows look.
On Sep 24, 2013, at 4:52 AM, Steve Loughran ste...@hortonworks.com wrote:
Hi.
You need to know that we don't really
nadig ankitr...@gmail.com wrote:
thanks a lot!
/common/branches/branch-0.20/src/examples/org/apache/hadoop/examples/MultiFileWordCount.java
What should be the correct behaviour of getPos() in the RecordReader?
http://stackoverflow.com/questions/18708832/hadoop-rawlocalfilesystem-and-getpos
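As a sketch of the convention in question (not Hadoop's actual RawLocalFileSystem or MultiFileWordCount code; the class below is purely illustrative), getPos() typically reports the number of bytes consumed so far from the start of the stream or split:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch, not the Hadoop RecordReader API: a line reader that
// tracks how many bytes it has consumed, which is what getPos()
// conventionally reports.
public class PositionTrackingReader {
    private final InputStream in;
    private long pos = 0;

    public PositionTrackingReader(InputStream in) { this.in = in; }

    // Returns the next line without its '\n', or null at end of stream.
    public String readLine() throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            pos++;                       // count every byte consumed, including '\n'
            if (b == '\n') return sb.toString();
            sb.append((char) b);
        }
        return sb.length() > 0 ? sb.toString() : null;
    }

    // Convention: bytes read so far from the start of the stream/split.
    public long getPos() { return pos; }

    public static void main(String[] args) throws IOException {
        PositionTrackingReader r = new PositionTrackingReader(
                new ByteArrayInputStream("ab\ncd\n".getBytes()));
        r.readLine();
        System.out.println(r.getPos()); // 3
        r.readLine();
        System.out.println(r.getPos()); // 6
    }
}
```

Frameworks commonly use this value only to estimate progress (bytes consumed vs. split length), which is why a monotonically increasing byte count is the usual expectation.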