Re: Individual Parallelism support for Flink Runner

2020-06-29 Thread amit kumar
Looks like native Flink supports this at the operator level:

https://ci.apache.org/projects/flink/flink-docs-stable/dev/parallel.html#operator-level

Regards,
Amit

On Mon, Jun 29, 2020 at 12:59 PM Kenneth Knowles  wrote:

> This exact issue has been discussed before, though I can't find the older
> threads. Basically, specifying parallelism is a workaround (aka a cost),
> not a feature (aka a benefit). Sometimes you have to pay that cost as it is
> the only solution currently understood or implemented. It depends on what
> your reason is for having to set parallelism.
>
> A lot of the time, the parallelism is a property of the combination of the
> pipeline and the data. The same pipeline with different data should have
> this tuned differently. For composite transforms in a library (not the top
> level pipeline) this is even more likely. It sounds like the suggestions
> here fit this case.
>
> Some of the time, max parallelism has to do with not overwhelming another
> service. This depends on the particular endpoint. That is usually
> construction-time information. In this case you want to have portable
> mandatory limits.
>
> Could you clarify your use case?
>
> Kenn
>
> On Mon, Jun 29, 2020 at 8:58 AM Luke Cwik  wrote:
>
>> Check out this thread[1] about adding "runner determined sharding" as a
>> general concept. This could be used to enhance the reshuffle implementation
>> significantly and might remove the need for per transform parallelism from
>> that specific use case and likely from most others.
>>
>> 1:
>> https://lists.apache.org/thread.html/rfd1ca93268eb215fbbcfe098c1dfb330f1b84fb89673325135dfd9a8%40%3Cdev.beam.apache.org%3E
>>
>> On Mon, Jun 29, 2020 at 4:03 AM Maximilian Michels 
>> wrote:
>>
>>> We could allow parameterizing transforms by using transform identifiers
>>> from the pipeline, e.g.
>>>
>>>
>>>options = ['--parameterize=MyTransform;parallelism=5']
>>>with beam.Pipeline(options=PipelineOptions(options)) as p:
>>>  p | beam.Create([1, 2, 3]) | 'MyTransform' >> beam.ParDo(..)
>>>
>>>
>>> Those hints should always be optional, such that a pipeline continues to
>>> run on all runners.
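A minimal sketch (plain Python, and the `--parameterize=Name;key=value` flag format is hypothetical, following Max's example) of how such a per-transform hint could be parsed into a name plus a hints dict:

```python
def parse_parameterize(option: str) -> tuple[str, dict]:
    """Parse a hypothetical '--parameterize=Name;key=value[;key=value...]'
    flag value into (transform_name, hints)."""
    # Strip the flag prefix if present, keeping only the value part.
    value = option.split("=", 1)[1] if option.startswith("--parameterize=") else option
    name, *pairs = value.split(";")
    hints = dict(pair.split("=", 1) for pair in pairs)
    return name, hints

name, hints = parse_parameterize("--parameterize=MyTransform;parallelism=5")
```

A runner could then look up `hints["parallelism"]` for the matching transform id and treat it as optional, as suggested above.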
>>>
>>> -Max
>>>
>>> On 28.06.20 14:30, Reuven Lax wrote:
>>> > However such a parameter would be specific to a single transform,
>>> > whereas maxNumWorkers is a global parameter today.
>>> >
>>> > On Sat, Jun 27, 2020 at 10:31 PM Daniel Collins wrote:
>>> >
>>> > I could imagine for example, a 'parallelismHint' field in the base
>>> > parameters that could be set to maxNumWorkers when running on
>>> > dataflow or an equivalent parameter when running on flink. It would
>>> > be useful to get a default value for the sharding in the Reshuffle
>>> > changes here https://github.com/apache/beam/pull/11919, but more
>>> > generally to have some decent guess on how to best shard work. Then
>>> > it would be runner-agnostic; you could set it to something like
>>> > numCpus on the local runner for instance.
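A sketch of that idea in plain Python (the `parallelism_hint` helper is hypothetical): an explicitly configured bound such as Dataflow's maxNumWorkers wins, otherwise fall back to the local CPU count, as suggested for the local runner:

```python
import os
from typing import Optional

def parallelism_hint(max_num_workers: Optional[int] = None) -> int:
    """Hypothetical cross-runner default: an explicit hint (e.g. Dataflow's
    maxNumWorkers) wins; otherwise fall back to the local CPU count."""
    if max_num_workers is not None:
        return max_num_workers
    # Local-runner fallback: one unit of parallelism per CPU.
    return os.cpu_count() or 1
```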
>>> >
>>> > On Sat, Jun 27, 2020 at 2:04 AM Reuven Lax wrote:
>>> >
>>> > It's an interesting question - this parameter is clearly very
>>> > runner specific (e.g. it would be meaningless for the Dataflow
>>> > runner, where parallelism is not a static constant). How should
>>> > we go about passing runner-specific options per transform?
>>> >
>>> > On Fri, Jun 26, 2020 at 1:14 PM Akshay Iyangar (aiyan...@godaddy.com) wrote:
>>> >
>>> > Hi beam community,
>>> >
>>> >
>>> >
>>> > So I had brought this issue up in our Slack channel, but I guess
>>> > this warrants a deeper discussion, and, if we do go ahead, what
>>> > the plan of action for it would be.
>>> >
>>> >
>>> >
>>> > So basically, the Flink runner currently doesn't support the
>>> > operator-level parallelism that native Flink provides out of the
>>> > box. I was wondering how the community feels about having some way
>>> > to pass parallelism for individual operators, especially for some
>>> > of the existing IOs.
>>> >
>>> >
>>> >
>>> > Wanted to know what people think of this.
>>> >
>>> >
>>> >
>>> > Thanks 
>>> >
>>> > Akshay I
>>> >
>>>
>>


Re: DynamicMessage in protobufs for re-usable beam pipelines

2020-06-17 Thread amit kumar
Thanks Brian for your response; Alex's code is very helpful. For my current
case I will use reflection to get the default instance types of the proto
messages. Another way, I think, would be to decouple the converters and
sinks to bypass this issue and do some of the conversions inside a DoFn.

Regards,
Amit

On Mon, Jun 15, 2020 at 11:28 AM Brian Hulette  wrote:

> I don't think I can help with your specific issue, but I can point you to
> some potentially useful code. +Alex Van Boxel  was
> working on a very similar strategy and added a lot of code for mapping
> protobufs to Beam schemas which you may be able to take advantage of. He
> added options to Beam schemas [1], and the ability to map protobuf options
> to schema options. He also added schema support for dynamic messages in [2].
>
> Brian
>
> [1]
> https://cwiki.apache.org/confluence/display/BEAM/%5BBIP-1%5D+Beam+Schema+Options
> [2] https://github.com/apache/beam/pull/10502
>
> On Mon, Jun 15, 2020 at 1:15 AM amit kumar  wrote:
>
>> Hi,
>>
>>
>> I intend to use protobuf options to trigger different transforms, to use
>> metadata from storage proto options for sink partitioning etc., and to
>> allow different protobuf message types to flow through the same pipeline,
>> running as different instances of the pipeline.
>>
>> I am able to parse descriptors, fields and options from file descriptors
>> compiled externally to the beam pipeline jar.
>>
>>
>> I am not able to use dynamicMessage.getDefaultInstanceForType() in the
>> Sink transform (a PTransform<..., PDone>), which needs a defaultInstance
>> of the message type to persist the data, since it fails with
>> "com.google.protobuf.DynamicMessage not Serializable".
>>
>> I wanted to check whether there is a way to use a generic proto in a
>> Beam pipeline, whether there are any examples of protobuf reflection
>> that can be used in this case, and whether there is any recommended way
>> to achieve this functionality.
>>
>>
>>
>> Many Thanks,
>>
>> Amit
>>
>


DynamicMessage in protobufs for re-usable beam pipelines

2020-06-15 Thread amit kumar
Hi,


I intend to use protobuf options to trigger different transforms, to use
metadata from storage proto options for sink partitioning etc., and to
allow different protobuf message types to flow through the same pipeline,
running as different instances of the pipeline.

I am able to parse descriptors, fields and options from file descriptors
compiled externally to the beam pipeline jar.


I am not able to use dynamicMessage.getDefaultInstanceForType() in the Sink
transform (a PTransform<..., PDone>), which needs a defaultInstance of the
message type to persist the data, since it fails with
"com.google.protobuf.DynamicMessage not Serializable".
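One common way around this kind of serialization error is to ship only serializable bytes with the transform and rebuild the non-serializable object lazily on each worker. A plain-Python sketch of the pattern (the class and field names are hypothetical stand-ins, not Beam or protobuf API):

```python
import pickle

class NonSerializableDefault:
    """Stand-in for a non-serializable object such as DynamicMessage."""
    def __init__(self, descriptor_bytes: bytes):
        self.descriptor_bytes = descriptor_bytes

class ProtoSinkFn:
    """Pattern: keep only serializable bytes on the transform; rebuild the
    non-serializable default instance lazily, once per worker."""
    def __init__(self, descriptor_bytes: bytes):
        self.descriptor_bytes = descriptor_bytes   # serializable payload
        self._default_instance = None              # rebuilt, never shipped

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_default_instance"] = None          # drop before pickling
        return state

    def setup(self):
        # Called once per worker (cf. DoFn setup) to rebuild the instance.
        if self._default_instance is None:
            self._default_instance = NonSerializableDefault(self.descriptor_bytes)

fn = ProtoSinkFn(b"\x0a\x03foo")
restored = pickle.loads(pickle.dumps(fn))  # now serializes cleanly
restored.setup()
```

In the Java case the analogous move would be to hold the serialized FileDescriptorSet bytes in the DoFn and reconstruct the DynamicMessage default instance in a @Setup method.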

I wanted to check whether there is a way to use a generic proto in a Beam
pipeline, whether there are any examples of protobuf reflection that can be
used in this case, and whether there is any recommended way to achieve this
functionality.



Many Thanks,

Amit


Runner dependent sharding for dynamic destinations in FileIO

2020-05-08 Thread amit kumar
Hi Everyone,

We use FileIO's writeDynamic to write dynamically to separate groups based
on an attribute's value in the input PCollection.
I wanted to check if there is a way to make the sharding runner-dependent.
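As an illustration of what runner-dependent sharding could mean, here is a plain-Python sketch (not Beam code; treating 0 as "unspecified" is a hypothetical convention) in which an unset shard count is filled in by the "runner", e.g. one shard per CPU:

```python
import os
import zlib

def shard_for(key: bytes, num_shards: int = 0) -> int:
    """Sketch of runner-dependent sharding: num_shards == 0 (a hypothetical
    'unspecified' convention) lets the runner pick, e.g. one shard per CPU."""
    if num_shards <= 0:
        num_shards = os.cpu_count() or 1
    # Stable hash so the same key always lands in the same shard.
    return zlib.crc32(key) % num_shards
```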


Many thanks,
Amit


Re: Default WindowFn for Unbounded source

2020-04-02 Thread amit kumar
Thank you all!
Your responses are very helpful.

On Wed, Apr 1, 2020 at 11:37 AM Robert Bradshaw  wrote:

>
>
> On Wed, Apr 1, 2020 at 12:53 AM Jan Lukavský  wrote:
>
>> Hi Amit,
>>
>> answers inline.
>> On 4/1/20 12:23 AM, amit kumar wrote:
>>
>> Thanks Ankur for your reply.
>>
>> By default the allowed lateness for a global window is zero, but can we
>> also set it to a non-zero value that will be used in the downstream
>> transforms where a group-by or a window-into with a trigger happens
>> (using allowedTimeStampSkew for unbounded sources / sources which have
>> timestamped elements)?
>>
>> Setting allowedLateness for a global window has no semantic meaning,
>> because the global window will be triggered (using the default trigger)
>> only at the end of input. Allowed lateness plays no role there for a
>> global window.
>> allowedTimestampSkew is used for something different: it is used when you
>> reassign timestamps to elements which already have timestamps (e.g.
>> assigned by the source) and you want to move them into the past. The skew
>> says how far into the past you can go.
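Jan's distinction can be expressed as a one-line predicate (a model sketch of the semantics, not Beam code): moving an element's timestamp into the past is permitted only within the configured skew.

```python
def timestamp_reassignment_ok(new_ts: float, current_ts: float,
                              allowed_skew: float) -> bool:
    # Reassigning into the past is allowed only within the configured
    # skew (cf. withAllowedTimestampSkew); moving forward is always fine.
    return new_ts >= current_ts - allowed_skew
```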
>>
>>
>> In both scenarios which I described earlier for *source transforms*, is
>> it possible that the pipeline will drop data if I do not specify
>> allowedTimeStampSkew / allowedLateness at the source transforms (given I
>> have late-arriving data)? Can I just set allowed lateness in the
>> transform where I do the groupBy or windowInto, rather than at the
>> source?
>>
>> AllowedLateness is a parameter of a stateful operation (e.g. GroupByKey),
>> not of the source. The source emits _watermarks_, which mark progress in
>> event time, but the data is then handled in the stateful operator. Each
>> operator can have its own allowedLateness (although the model ensures
>> that the lateness is by default inherited from one operator to the next).
>> Sources should simply assign elements to global windows (with no allowed
>> lateness, as allowed lateness has no meaning for global windows, as
>> mentioned above).
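The drop rule behind this can be modeled in a few lines (a sketch of the semantics, not runner code): late data targeting a window becomes droppable once the watermark passes the window's end plus the allowed lateness, which for the never-ending global window can never happen.

```python
import math

def window_expired(window_end: float, watermark: float,
                   allowed_lateness: float) -> bool:
    # Late data for a window is droppable once the watermark passes
    # the window's end plus the allowed lateness.
    return watermark > window_end + allowed_lateness

# The global window never ends, so it never expires and allowedLateness
# can never cause drops there.
GLOBAL_WINDOW_END = math.inf
```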
>>
>>
>> In the case of TextIO.read, which reads from a bounded source, where I
>> assign timestamps to all elements in the second transform, will it be
>> useful in this case as well to set allowedTimeStampSkew after assigning
>> timestamps? I am trying to understand how the elements will be available
>> after assigning timestamps (given all files are present on the file
>> system): will they be ordered by timestamp, and can some elements be
>> read after the watermark has progressed past an element's event time?
>>
>> When executing batch pipeline, there is actually no watermark. Event time
>> moves discretely from -inf (computation not finished yet) to +inf
>> (computation finished). In the case you describe, you should not even need
>> to set allowedTimestampSkew, because elements output from TextIO should
>> (probably) be assigned timestamp of BoundedWindow.TIMESTAMP_MIN_VALUE (I'm
>> not sure if the model guarantees this, but it seems reasonable). You can
>> then reassign timestamps to the future as you wish. You don't have to worry
>> about allowed lateness either, because that only applies to streaming
>> pipelines, where event time moves more smoothly. By the definition of how
>> event time progresses in case of batch pipelines, there is no "late" (after
>> watermark) data in this case.
>>
>
> Clarification: sources should assign elements to their upstream window
> (similar to DoFns), generally with the appropriate timestamp (unless they
> are timestamp aware). The upstream of a bounded source is typically
> Impulse, which is in the global window with MIN_TIMESTAMP, but could be
> different. This better unifies the case of reading the elements from a set
> of filenames published to pubsub, for example.
>
>>
>>
>> TextIO.Read.
>>  |. Bounded source
>>  |. Global Window
>>  |.  -infinity watermark
>> apply
>> WithTimeStamps (Based on a timestamp attribute in file)
>>|.   timestamped elements (watermark starts from -infinity and follows
>> the timestamp from timestamp attribute)
>>    |.   Global Window
>>
>>
>> Regards,
>> Amit
>>
>> On Tue, Mar 31, 2020 at 11:26 AM Ankur Goenka  wrote:
>>
>>> Hi Amit,
>>>
>>> As you don't have any GroupByKey or trigger in your pipeline, you don't
>>> need to do allowed lateness.
>>> For unbounded source, Global window will never fire a trigger or emit
>>> GroupByKey.
>>> In the code you linked, a trigger is used which uses allowedLateness.
>>>
>>

Re: Default WindowFn for Unbounded source

2020-03-31 Thread amit kumar
Thanks Ankur for your reply.

By default the allowed lateness for a global window is zero, but can we
also set it to a non-zero value that will be used in the downstream
transforms where a group-by or a window-into with a trigger happens (using
allowedTimeStampSkew for unbounded sources / sources which have
timestamped elements)?

In both scenarios which I described earlier for *source transforms*, is it
possible that the pipeline will drop data if I do not specify
allowedTimeStampSkew / allowedLateness at the source transforms (given I
have late-arriving data)? Can I just set allowed lateness in the transform
where I do the groupBy or windowInto, rather than at the source?

In the case of TextIO.read, which reads from a bounded source, where I
assign timestamps to all elements in the second transform, will it be
useful in this case as well to set allowedTimeStampSkew after assigning
timestamps? I am trying to understand how the elements will be available
after assigning timestamps (given all files are present on the file
system): will they be ordered by timestamp, and can some elements be read
after the watermark has progressed past an element's event time?


TextIO.Read.
 |. Bounded source
 |. Global Window
 |.  -infinity watermark
apply
WithTimeStamps (Based on a timestamp attribute in file)
   |.   timestamped elements (watermark starts from -infinity and follows
the timestamp from timestamp attribute)
   |.   Global Window


Regards,
Amit

On Tue, Mar 31, 2020 at 11:26 AM Ankur Goenka  wrote:

> Hi Amit,
>
> As you don't have any GroupByKey or trigger in your pipeline, you don't
> need to do allowed lateness.
> For unbounded source, Global window will never fire a trigger or emit
> GroupByKey.
> In the code you linked, a trigger is used which uses allowedLateness.
>
> Thanks,
> Ankur
>
> On Tue, Mar 31, 2020 at 11:20 AM amit kumar  wrote:
>
>> Thanks Jan!
>> I have a question based on this on Global Window and allowed lateness,
>> with default trigger for the following
>>  scenarios:
>>
>> Case 1-
>> TextIO.Read.
>>  |. Bounded source
>>  |. Global Window
>>  |.  -infinity watermark
>> apply
>> WithTimeStamps (Based on a timestamp attribute in file)
>>|.   timestamped elements (watermark starts from -infinity and follows
>> the timestamp from timestamp attribute)
>>|.   Global Window
>>|. (Will I never need to do allowedLateness in this case with default
>> trigger? Will there be any benefit since the window is global and watermark
>> will pass the end of window when everything is processed ?  )
>>
>>
>> Case 2 -
>> KinesisIO.read
>> | .Unbounded Source
>> |. Default Global Window
>> |. watermark based on arrival time
>>  apply
>> WithTimeStamps (Based on a timestamp attribute from the stream)
>>|.   timestamped elements  ( watermark follows the timestamp from
>> timestamp attribute)
>>|.   Global Window
>>|. Watermark based on event timestamp.
>>| Same question here will there be any benefit of using
>> allowedLateness since window is global ?
>>
>> In the code example below allowedLateness is used for global window ?
>>
>> https://github.com/apache/beam/blob/828b897a2439437d483b1bd7f2a04871f077bde0/examples/java/src/main/java/org/apache/beam/examples/complete/game/LeaderBoard.java#L307
>>
>> Regards,
>> Amit
>>
>> On Tue, Mar 31, 2020 at 2:34 AM Jan Lukavský  wrote:
>>
>>> Hi Amit,
>>>
>>> the window function applied by default is
>>> WindowingStrategy.globalDefault(), [1] - global window with zero allowed
>>> lateness.
>>>
>>> Cheers,
>>>
>>>   Jan
>>>
>>> [1]
>>>
>>> https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/values/WindowingStrategy.java#L105
>>>
>>> On 3/31/20 10:22 AM, amit kumar wrote:
>>> > Hi All,
>>> >
>>> > Is there a default WindowFn that gets applied to elements of an
>>> > unbounded source.
>>> >
>>> > For example, if I have a Kinesis input source ,for which all elements
>>> > are timestamped with ArrivalTime, what will be the default windowing
>>> > applied to the output of read transform ?
>>> >
>>> > Is this runner dependent ?
>>> >
>>> > Regards,
>>> > Amit
>>>
>>


Re: Default WindowFn for Unbounded source

2020-03-31 Thread amit kumar
Thanks Ankur for your reply.

By default the allowed lateness for a global window is zero, but can we
also set it to a non-zero value that will be used in the downstream
transforms where a group-by or a window-into with a trigger happens (using
allowedTimeStampSkew for unbounded sources / sources which have
timestamped elements)?

In both scenarios which I described earlier for *source transforms*, is it
possible that the pipeline will drop data if I do not specify
allowedTimeStampSkew / allowedLateness at the source transforms (given I
have late-arriving data)? Can I just set allowed lateness in the transform
where I do the groupBy or windowInto, rather than at the source?

In the case of TextIO.read, which reads from a bounded source, where I
assign timestamps to all elements in the second transform, will it be
useful in this case as well to set allowedTimeStampSkew at the source
transform? I am trying to understand how the elements will be available
after assigning timestamps (given all files are present on the file
system): will they be ordered by timestamp, and can some elements be read
after the watermark has progressed past an element's event time?


TextIO.Read.
 |. Bounded source
 |. Global Window
 |.  -infinity watermark
apply
WithTimeStamps (Based on a timestamp attribute in file)
   |.   timestamped elements (watermark starts from -infinity and follows
the timestamp from timestamp attribute)
   |.   Global Window


Regards,
Amit



In the scenario I provided, if I have downstream transforms that do a
group-by or a window-into with triggers, will allowed lateness be useful at
the source transforms? If allowedLateness only pushes back the timestamp of
the element, then it seems it would be useful.



On Tue, Mar 31, 2020 at 11:20 AM amit kumar  wrote:

> Thanks Jan!
> I have a question based on this on Global Window and allowed lateness,
> with default trigger for the following
>  scenarios:
>
> Case 1-
> TextIO.Read.
>  |. Bounded source
>  |. Global Window
>  |.  -infinity watermark
> apply
> WithTimeStamps (Based on a timestamp attribute in file)
>|.   timestamped elements (watermark starts from -infinity and follows
> the timestamp from timestamp attribute)
>|.   Global Window
>|. (Will I never need to do allowedLateness in this case with default
> trigger? Will there be any benefit since the window is global and watermark
> will pass the end of window when everything is processed ?  )
>
>
> Case 2 -
> KinesisIO.read
> | .Unbounded Source
> |. Default Global Window
> |. watermark based on arrival time
>  apply
> WithTimeStamps (Based on a timestamp attribute from the stream)
>|.   timestamped elements  ( watermark follows the timestamp from
> timestamp attribute)
>|.   Global Window
>|. Watermark based on event timestamp.
>| Same question here will there be any benefit of using
> allowedLateness since window is global ?
>
> In the code example below allowedLateness is used for global window ?
>
> https://github.com/apache/beam/blob/828b897a2439437d483b1bd7f2a04871f077bde0/examples/java/src/main/java/org/apache/beam/examples/complete/game/LeaderBoard.java#L307
>
> Regards,
> Amit
>
> On Tue, Mar 31, 2020 at 2:34 AM Jan Lukavský  wrote:
>
>> Hi Amit,
>>
>> the window function applied by default is
>> WindowingStrategy.globalDefault(), [1] - global window with zero allowed
>> lateness.
>>
>> Cheers,
>>
>>   Jan
>>
>> [1]
>>
>> https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/values/WindowingStrategy.java#L105
>>
>> On 3/31/20 10:22 AM, amit kumar wrote:
>> > Hi All,
>> >
>> > Is there a default WindowFn that gets applied to elements of an
>> > unbounded source.
>> >
>> > For example, if I have a Kinesis input source ,for which all elements
>> > are timestamped with ArrivalTime, what will be the default windowing
>> > applied to the output of read transform ?
>> >
>> > Is this runner dependent ?
>> >
>> > Regards,
>> > Amit
>>
>


Re: Default WindowFn for Unbounded source

2020-03-31 Thread amit kumar
Thanks Jan!
I have a question, based on this, about the global window and allowed
lateness with the default trigger, for the following scenarios:

Case 1-
TextIO.Read.
 |. Bounded source
 |. Global Window
 |.  -infinity watermark
apply
WithTimeStamps (Based on a timestamp attribute in file)
   |.   timestamped elements (watermark starts from -infinity and follows
the timestamp from timestamp attribute)
   |.   Global Window
   |. (Will I ever need allowedLateness in this case with the default
trigger? Will there be any benefit, since the window is global and the
watermark will pass the end of the window once everything is processed?)


Case 2 -
KinesisIO.read
| .Unbounded Source
|. Default Global Window
|. watermark based on arrival time
 apply
WithTimeStamps (Based on a timestamp attribute from the stream)
   |.   timestamped elements  ( watermark follows the timestamp from
timestamp attribute)
   |.   Global Window
   |. Watermark based on event timestamp.
   | Same question here: will there be any benefit to using
allowedLateness, since the window is global?

In the code example below, is allowedLateness used with a global window?
https://github.com/apache/beam/blob/828b897a2439437d483b1bd7f2a04871f077bde0/examples/java/src/main/java/org/apache/beam/examples/complete/game/LeaderBoard.java#L307

Regards,
Amit

On Tue, Mar 31, 2020 at 2:34 AM Jan Lukavský  wrote:

> Hi Amit,
>
> the window function applied by default is
> WindowingStrategy.globalDefault(), [1] - global window with zero allowed
> lateness.
>
> Cheers,
>
>   Jan
>
> [1]
>
> https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/values/WindowingStrategy.java#L105
>
> On 3/31/20 10:22 AM, amit kumar wrote:
> > Hi All,
> >
> > Is there a default WindowFn that gets applied to elements of an
> > unbounded source.
> >
> > For example, if I have a Kinesis input source ,for which all elements
> > are timestamped with ArrivalTime, what will be the default windowing
> > applied to the output of read transform ?
> >
> > Is this runner dependent ?
> >
> > Regards,
> > Amit
>


Default WindowFn for Unbounded source

2020-03-31 Thread amit kumar
Hi All,

Is there a default WindowFn that gets applied to elements of an unbounded
source?

For example, if I have a Kinesis input source, for which all elements are
timestamped with ArrivalTime, what will be the default windowing applied to
the output of the read transform?

Is this runner-dependent?

Regards,
Amit


Discrete Transforms vs One Single transform

2020-02-20 Thread amit kumar
Hi All,

I am looking for input to understand the effects of converting multiple
discrete transforms into one single transform (performing all the steps in
a single PTransform).

Which is the better approach: multiple discrete transforms, or one single
transform with lambdas and multiple functions? Will there be any effect on
performance, debugging, metrics, or other factors, and what are the best
practices?

Version A
PCollection<SchematizedElement> myRecords = pbegin
    .apply("Kinesis Source", readFromKinesis())          // transform 1
    .apply(MapElements
        .into(TypeDescriptors.strings())
        .via(record -> new String(record.getDataAsBytes(),
            StandardCharsets.UTF_8)))                    // transform 2
    .apply(convertByteStringToJsonNode())                // transform 3
    .apply(schematizeElements());                        // transform 4

Version B
PCollection<SchematizedElement> myRecords = pbegin
    .apply("Kinesis Source", readFromKinesis())          // transform 1
    .apply(MapElements
        .into(TypeDescriptor.of(SchematizedElement.class))
        .via(inputKinesisRecord -> {
            byte[] record = inputKinesisRecord.getDataAsBytes();
            JsonNode jsonNode = convertByteStringToJsonNode(record);
            return getSchematizedElement(jsonNode);
        }));                                             // transform 2
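For element-wise steps like these, the two versions compute the same thing; a plain-Python sketch of that equivalence (the `decode`/`parse` functions are hypothetical stand-ins, not Beam code). Runners such as Dataflow and Flink typically fuse adjacent element-wise transforms anyway, so raw performance is usually comparable, while discrete transforms keep per-step names for metrics and debugging.

```python
# Chaining discrete per-element steps vs. fusing them into one lambda
# produces the same result for element-wise processing.
decode = lambda raw: raw.decode("utf-8")
parse = lambda s: {"raw": s}   # stand-in for JSON parsing / schematizing

records = [b"a", b"b"]
discrete = [parse(decode(r)) for r in records]         # Version A style
fused = [{"raw": r.decode("utf-8")} for r in records]  # Version B style
assert discrete == fused
```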


Thanks in advance!
Amit


Re: Contributor permission for Beam Jira tickets

2019-11-12 Thread amit kumar
Thanks!

On Tue, Nov 12, 2019 at 3:49 PM Kenneth Knowles  wrote:

> Done. Welcome!
>
> On Tue, Nov 12, 2019 at 3:40 PM amit kumar  wrote:
>
>> Hi Beam Devs,
>>
>> I am Amit from Godaddy and I am looking to contribute to Beam.
>> Could you please add me as a contributor. My Id is - amitkumar27
>>
>> Regards,
>> Amit
>>
>> On Wed, Nov 6, 2019 at 9:59 AM amit kumar  wrote:
>>
>>> Hi Beam Devs,
>>>
>>> I am Amit from Godaddy and I am looking to contribute to Beam.
>>> Could you please add me as a contributor and a subscriber to the Dev
>>> mailing list.
>>>
>>> Regards,
>>> Amit
>>>
>>


Re: Contributor permission for Beam Jira tickets

2019-11-12 Thread amit kumar
Hi Beam Devs,

I am Amit from Godaddy and I am looking to contribute to Beam.
Could you please add me as a contributor. My Id is - amitkumar27

Regards,
Amit

On Wed, Nov 6, 2019 at 9:59 AM amit kumar  wrote:

> Hi Beam Devs,
>
> I am Amit from Godaddy and I am looking to contribute to Beam.
> Could you please add me as a contributor and a subscriber to the Dev
> mailing list.
>
> Regards,
> Amit
>