Re: Issues building/running 2.25 on java 8

2020-11-06 Thread Steve Niemitz
I downgraded google_cloud_bigdataoss from 2.1.5 back to 2.1.3, which had
recently been upgraded [1], and that fixed the issue.  It looks like 2.1.5 was
transitively pulling in protobuf 3.13.0, which isn't binary-compatible with
Java 8 (?!??).

[1]
https://github.com/apache/beam/commit/7fec038bf9e3861462744ba5522208a4b9d15b85#diff-0435a83a413ec063bf7e682cadcd56776cd18fc878f197cc99a65fc231ef2047
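
For anyone hitting the same conflict, here is a minimal Gradle sketch of an
alternative workaround: forcing the protobuf version instead of downgrading
bigdataoss. The coordinates and the pinned version are illustrative
assumptions, not something verified in this thread:

    // Force a Java-8-compatible protobuf-java across all configurations.
    // 3.11.4 is an assumed "known good" pre-3.12 release; verify against
    // your own runtime before relying on it.
    configurations.all {
        resolutionStrategy {
            force 'com.google.protobuf:protobuf-java:3.11.4'
        }
    }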

On Fri, Nov 6, 2020 at 6:27 PM Steve Niemitz  wrote:

> yeah, I built it via:
> JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64 ./gradlew --no-daemon
> -Ppublishing -PnoSigning publishMavenJavaPublicationToMavenLocal
>
> For me java8 is also my default
>
> On Fri, Nov 6, 2020 at 6:25 PM Kyle Weaver  wrote:
>
>> Do you have JAVA_HOME set? (possibly related:
>> https://issues.apache.org/jira/browse/BEAM-11080)
>>
>> On Fri, Nov 6, 2020 at 3:13 PM Steve Niemitz  wrote:
>>
>>> I'm trying out 2.25 (built from source, using java 8), and running into
>>> this error, both on the direct runner and dataflow:
>>>
>>> Caused by: java.lang.NoSuchMethodError:
>>> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
>>> at
>>> com.google.protobuf.NioByteString.copyToInternal(NioByteString.java:112)
>>> at com.google.protobuf.ByteString.toByteArray(ByteString.java:695)
>>> at com.google.protobuf.NioByteString.writeTo(NioByteString.java:123)
>>> at
>>> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:67)
>>> at
>>> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:37)
>>> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:74)
>>> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:68)
>>>
>>> It seems like this was introduced in protobuf 3.12.4 based on this issue
>>> I found [1]
>>>
>>> Am I doing something wrong with my build? Or am I just hitting an
>>> untested combo here?
>>>
>>> [1] https://github.com/protocolbuffers/protobuf/issues/7827
>>>
>>


Re: Issues building/running 2.25 on java 8

2020-11-06 Thread Steve Niemitz
yeah, I built it via:
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64 ./gradlew --no-daemon
-Ppublishing -PnoSigning publishMavenJavaPublicationToMavenLocal

For me java8 is also my default

On Fri, Nov 6, 2020 at 6:25 PM Kyle Weaver  wrote:

> Do you have JAVA_HOME set? (possibly related:
> https://issues.apache.org/jira/browse/BEAM-11080)
>
> On Fri, Nov 6, 2020 at 3:13 PM Steve Niemitz  wrote:
>
>> I'm trying out 2.25 (built from source, using java 8), and running into
>> this error, both on the direct runner and dataflow:
>>
>> Caused by: java.lang.NoSuchMethodError:
>> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
>> at
>> com.google.protobuf.NioByteString.copyToInternal(NioByteString.java:112)
>> at com.google.protobuf.ByteString.toByteArray(ByteString.java:695)
>> at com.google.protobuf.NioByteString.writeTo(NioByteString.java:123)
>> at
>> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:67)
>> at
>> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:37)
>> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:74)
>> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:68)
>>
>> It seems like this was introduced in protobuf 3.12.4 based on this issue
>> I found [1]
>>
>> Am I doing something wrong with my build? Or am I just hitting an
>> untested combo here?
>>
>> [1] https://github.com/protocolbuffers/protobuf/issues/7827
>>
>


Re: Issues building/running 2.25 on java 8

2020-11-06 Thread Kyle Weaver
Do you have JAVA_HOME set? (possibly related:
https://issues.apache.org/jira/browse/BEAM-11080)

On Fri, Nov 6, 2020 at 3:13 PM Steve Niemitz  wrote:

> I'm trying out 2.25 (built from source, using java 8), and running into
> this error, both on the direct runner and dataflow:
>
> Caused by: java.lang.NoSuchMethodError:
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
> at com.google.protobuf.NioByteString.copyToInternal(NioByteString.java:112)
> at com.google.protobuf.ByteString.toByteArray(ByteString.java:695)
> at com.google.protobuf.NioByteString.writeTo(NioByteString.java:123)
> at
> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:67)
> at
> org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:37)
> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:74)
> at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:68)
>
> It seems like this was introduced in protobuf 3.12.4 based on this issue I
> found [1]
>
> Am I doing something wrong with my build? Or am I just hitting an untested
> combo here?
>
> [1] https://github.com/protocolbuffers/protobuf/issues/7827
>


Issues building/running 2.25 on java 8

2020-11-06 Thread Steve Niemitz
I'm trying out 2.25 (built from source, using java 8), and running into
this error, both on the direct runner and dataflow:

Caused by: java.lang.NoSuchMethodError:
java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at com.google.protobuf.NioByteString.copyToInternal(NioByteString.java:112)
at com.google.protobuf.ByteString.toByteArray(ByteString.java:695)
at com.google.protobuf.NioByteString.writeTo(NioByteString.java:123)
at
org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:67)
at
org.apache.beam.sdk.extensions.protobuf.ByteStringCoder.encode(ByteStringCoder.java:37)
at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:74)
at org.apache.beam.sdk.coders.DelegateCoder.encode(DelegateCoder.java:68)

It seems like this was introduced in protobuf 3.12.4, based on this issue I
found [1].

Am I doing something wrong with my build? Or am I just hitting an untested
combo here?

[1] https://github.com/protocolbuffers/protobuf/issues/7827
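
The root cause, per the linked protobuf issue, is a known JDK pitfall: Java 9
added covariant overrides on ByteBuffer (position, limit, flip, and others)
that return ByteBuffer instead of Buffer, and class files compiled on a newer
JDK with -source/-target 8 but without --release 8 record the new descriptor,
which does not exist on a Java 8 JVM. A minimal Java sketch of the failure
mode (an illustration, not code from this thread):

    import java.nio.Buffer;
    import java.nio.ByteBuffer;

    public class ByteBufferJava8Repro {
      public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);

        // Compiled on JDK 9+ without --release 8, this call is recorded as
        // ByteBuffer.position(I)Ljava/nio/ByteBuffer; (the covariant
        // override added in Java 9). A Java 8 JVM only has
        // Buffer.position(I)Ljava/nio/Buffer;, so the same class file fails
        // there with exactly the NoSuchMethodError quoted above.
        buf.position(8);

        // Source-level workaround: call through the Buffer supertype so the
        // recorded descriptor resolves on both Java 8 and Java 9+.
        ((Buffer) buf).position(8);

        System.out.println("position = " + buf.position());
      }
    }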


Re: Bigtable for BeamSQL - question about the schema design

2020-11-06 Thread Ismaël Mejía
Thanks for the references, Rui. I think it is worth considering how
open source systems do it.
The great thing about this is that we could 'easily' map Piotr's work
for Bigtable to HBase too once it is done.

On Fri, Nov 6, 2020 at 8:22 PM Rui Wang  wrote:
>
> Two more references, from how Flink and Spark use HBase via SQL:
>
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/hbase.html
> https://stackoverflow.com/questions/39530938/sparksql-on-hbase-tables
>
> -Rui
>
> On Thu, Nov 5, 2020 at 9:46 AM Piotr Szuberski  
> wrote:
>>
>> Thanks for the resources! I'll try to follow the BQ approach. I'd also add
>> something like a flattened schema so the user can use simple types only. It
>> would be limited to BINARY values though. Something like:
>> CREATE EXTERNAL TABLE(
>>   key VARCHAR NOT NULL,
>>   family VARCHAR NOT NULL,
>>   column VARCHAR NOT NULL,
>>   value BINARY NOT NULL,
>>   timestampMicros BIGINT NOT NULL
>> )
>> The cells array would be flattened (denormalized) and easy to use. In the
>> case of single-valued cells it would also be quite efficient.
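
As a sketch of how the flattened layout above would read at query time (the
table name is a placeholder and the latest-cell idiom is an assumption, not
part of the proposal; 'column' would likely need quoting as a reserved word
in a real dialect):

    -- Latest cell per (key, family, column) over the flattened layout.
    SELECT t.key, t.family, t.column, t.value
    FROM bigtableFlattened t
    JOIN (
      SELECT key, family, column, MAX(timestampMicros) AS maxTs
      FROM bigtableFlattened
      GROUP BY key, family, column
    ) latest
    ON t.key = latest.key
      AND t.family = latest.family
      AND t.column = latest.column
      AND t.timestampMicros = latest.maxTs;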
>>
>> On 2020/11/05 00:22:44, Brian Hulette  wrote:
>> > I think we should take a look at how BigTable is integrated with other SQL
>> > systems. For example we could get some inspiration from BigQuery's support
>> > for querying BigTable data [1]. It looks like by default it uses something
>> > like (1), but they recognize this is difficult to process with SQL, so they
>> > have an option you can set to elevate certain columns as sub-fields (more
>> > like (2)), and you can also indicate you only want to get the latest value
>> > for each column.
>> >
>> > In any case this may be a good candidate for not requiring the user to
>> > actually specify a schema, and instead letting the table be fully
>> > determined by options.
>> >
>> > [1] https://cloud.google.com/bigquery/external-data-bigtable
>> >
>> > On Tue, Nov 3, 2020 at 11:41 PM Piotr Szuberski wrote:
>> >
>> > > I've dug into the topic a bit and I think the 2nd approach will fit
>> > > better. The schema in Bigtable is not supposed to change that often, and
>> > > specifying our own schema is more SQL-like and will cause less potential
>> > > trouble.
>> > >
>> > > On 2020/11/03 11:01:57, Piotr Szuberski 
>> > > wrote:
>> > > > I'm going to write a Bigtable table for BeamSQL and I have a question
>> > > > about the schema design: which one would be preferable?
>> > > >
>> > > > Bigtable stores its data in a table with rows that contain a key and a
>> > > > 3-dimensional array, where the 1st dimension is families with names, the
>> > > > 2nd dimension is columns with qualifiers, and the 3rd is cells containing
>> > > > a timestamp and a value.
>> > > >
>> > > > Two design solutions come to mind:
>> > > > 1) Fix schema to be a generic Bigtable row:
>> > > >
>> > > > Row(key, Array(Row(family, Array(Row(qualifier, Array(Row(value,
>> > > > timestamp)))))))
>> > > >
>> > > > Then the table creation definition would always be in form:
>> > > >
>> > > > CREATE TABLE bigtableexample1()
>> > > > TYPE 'bigtable'
>> > > > LOCATION 'https://googleapis.com/bigtable/projects/projectId/instances/instanceId/tables/tableId'
>> > > >
>> > > > 2) Let the user design their schema by providing the desired families
>> > > > and columns, in something like:
>> > > > CREATE TABLE bigtableexample2(
>> > > >   key VARCHAR,
>> > > >   family1 ROW<
>> > > >     column1 ROW<
>> > > >       cells ARRAY<ROW<
>> > > >         value VARCHAR,
>> > > >         timestamp BIGINT
>> > > >       >>
>> > > >     >,
>> > > >     column2 ROW<
>> > > >       cells ARRAY<ROW<
>> > > >         value VARCHAR,
>> > > >         timestamp BIGINT
>> > > >       >>
>> > > >     >
>> > > >   >
>> > > > )
>> > > > TYPE 'bigtable'
>> > > > LOCATION 'https://googleapis.com/bigtable/projects/projectId/instances/instanceId/tables/tableId'
>> > > >
>> > > > For me the 1st approach is more user-friendly (typing the schema for the
>> > > > 2nd would be troublesome) and more flexible, especially when the row's
>> > > > schema (families and columns) changes and a user wants to perform
>> > > > 'SELECT * FROM bigtableexampleX'.
>> > > >
>> > > > WDYT? I'd welcome any feedback. Maybe there is some 3rd option that
>> > > > would be better?
>> > > >
>> > >
>> >


Re: Beam Dependency Check Report (2020-11-02)

2020-11-06 Thread Ismaël Mejía
The report is useful for awareness; the issue is that we cannot
systematically update these dependencies, which diminishes the value of
the report.

I don't know if we can eventually filter some things out of the report or,
better, create a section for 'sensitive' dependencies that we cannot
update systematically. Can someone more familiar with the report tell if
this is possible?

Also, with recent changes we are starting to test against multiple
versions, so this could be less of an issue for some systems like Kafka and
hopefully soon Hadoop and others.



On Thu, Nov 5, 2020 at 7:46 PM Kenneth Knowles  wrote:

> I think at a minimum it shouldn't recommend major version upgrades. Almost
> all projects do breaking changes there. And really a ton of projects break
> things at minor versions too.
>
> I don't have too strong an opinion. I very rarely read the report. Just
> wanted to tie this together with the "sensible upgrades" thread. We have a
> bot that is suggesting a lot of upgrades in a too-mechanical fashion
> perhaps?
>
> Kenn
>
> On Thu, Nov 5, 2020 at 10:41 AM Pablo Estrada  wrote:
>
>> Sorry, I missed Ismael's comment, but I'd like to understand how this
>> report falls short. Does it flag certain dependency versions as old even
>> though they're still the 'de-facto' standard version?
>> Does it make sense to exclude dependencies that we are aware of (e.g.
>> Avro / Spark / idk) while keeping the rest of the report?
>>
>> Best
>> -P.
>>
>>
>>
>> On Thu, Nov 5, 2020 at 10:13 AM Kenneth Knowles  wrote:
>>
>>> Ismaël pointed out that the dependency upgrades recommended by this bot
>>> are often not a good idea. Should we disable it?
>>>
>>> Kenn
>>>
>>> On Mon, Nov 2, 2020 at 4:31 AM Apache Jenkins Server <
>>> jenk...@builds.apache.org> wrote:
>>>
 High Priority Dependency Updates Of Beam Python SDK:

 Dependency Name        Current Version  Latest Version  Current Release Date  Latest Release Date  JIRA Issue
 chromedriver-binary    86.0.4240.22.0   87.0.4280.20.0  2020-09-07            2020-10-19           BEAM-10426
 dill                   0.3.1.1          0.3.3           2019-10-07            2020-11-02           BEAM-11167
 google-cloud-bigquery  1.28.0           2.2.0           2020-10-05            2020-10-26           BEAM-5537
 google-cloud-dlp       1.0.0            2.0.0           2020-06-29            2020-10-05           BEAM-10344
 google-cloud-language  1.3.0            2.0.0           2020-10-26            2020-10-26           BEAM-8
 google-cloud-pubsub    1.7.0            2.1.0           2020-07-20            2020-10-05           BEAM-5539
 google-cloud-vision    1.0.0            2.0.0           2020-03-24            2020-10-05           BEAM-9581
 grpcio-tools           1.30.0           1.33.2          2020-06-29            2020-11-02           BEAM-9582
 mock                   2.0.0            4.0.2           2019-05-20            2020-10-05           BEAM-7369
 mypy-protobuf          1.18             1.23            2020-03-24            2020-06-29           BEAM-10346
 nbconvert              5.6.1            6.0.7           2020-10-05            2020-10-05           BEAM-11007
 Pillow                 7.2.0            8.0.1           2020-10-19            2020-10-26           BEAM-11071
 pyarrow                0.17.1           2.0.0           2020-07-27            2020-10-26           BEAM-10582
 PyHamcrest             1.10.1           2.0.2           2020-01-20            2020-07-08           BEAM-9155
 pytest                 4.6.11           6.1.2           2020-07-08            2020-11-02           BEAM-8606
 pytest-xdist           1.34.0           2.1.0           2020-08-17            2020-08-28           BEAM-10713
 tenacity               5.1.5            6.2.0           2019-11-11            2020-06-29           BEAM-8607

 High Priority Dependency Updates Of Beam Java SDK:

 Dependency Name  Current Version  Latest Version  Current Release Date  Latest Release Date

Re: Upgrade instruction from TimerDataCoder to TimerDataCoderV2

2020-11-06 Thread Ke Wu
Thank you, Jeff, for the quick reply. Does this mean that, if a stateful job
using the old coder wants to start using timer family id, it needs to discard
all of its state before it can switch to the V2 coder to get timer family id
support?

Best,
Ke

> On Nov 6, 2020, at 11:27 AM, Jeff Klukas  wrote:
> 
> Ke - You are correct that generally data encoded with a previous coder 
> version cannot be read with an updated coder. The formats have to match 
> exactly.
> 
> As far as I'm aware, it's necessary to flush a job and start with fresh state 
> in order to upgrade coders.
> 
> On Fri, Nov 6, 2020 at 2:13 PM Ke Wu  wrote:
> Hello,
> 
> I found that TimerDataCoderV2 was created to include the timer family id and
> output timestamp fields in TimerData. In addition, the new fields are
> encoded between the old fields, which I suppose means the V2 coder cannot
> decode data that was encoded by the V1 coder, and vice versa. My question
> is: how should we properly upgrade without losing existing state persisted
> in a store?
> 
> Best,
> Ke



Re: Upgrade instruction from TimerDataCoder to TimerDataCoderV2

2020-11-06 Thread Jeff Klukas
Ke - You are correct that generally data encoded with a previous coder
version cannot be read with an updated coder. The formats have to match
exactly.

As far as I'm aware, it's necessary to flush a job and start with fresh
state in order to upgrade coders.
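
To make the incompatibility concrete, here is a toy Java sketch. The field
names echo TimerData, but the byte layout is invented for illustration and is
not Beam's actual coder format:

    import java.io.*;

    public class CoderWireCompatDemo {
      // "V1" layout: timestamp, then a domain tag.
      static byte[] encodeV1(long timestamp, int domain) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(timestamp);
        out.writeInt(domain);
        return bos.toByteArray();
      }

      // "V2" layout inserts familyId *between* the old fields, mirroring how
      // TimerDataCoderV2 adds fields between TimerDataCoder's fields.
      static void decodeV2(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        long timestamp = in.readLong();
        String familyId = in.readUTF(); // misreads bytes that V1 meant as domain
        int domain = in.readInt();
        System.out.println(timestamp + " " + familyId + " " + domain);
      }

      public static void main(String[] args) throws IOException {
        // Feeding V1 bytes to the V2 decoder misparses everything after the
        // insertion point; with this input it fails with an EOFException
        // partway through. This is why the formats must match exactly and a
        // job has to flush state before switching coders.
        decodeV2(encodeV1(1_604_700_000_000L, 42));
      }
    }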

On Fri, Nov 6, 2020 at 2:13 PM Ke Wu  wrote:

> Hello,
>
> I found that TimerDataCoderV2 was created to include the timer family id and
> output timestamp fields in TimerData. In addition, the new fields are
> encoded between the old fields, which I suppose means the V2 coder cannot
> decode data that was encoded by the V1 coder, and vice versa. My question is:
> how should we properly upgrade without losing existing state persisted in a
> store?
>
> Best,
> Ke


Re: Bigtable for BeamSQL - question about the schema design

2020-11-06 Thread Rui Wang
Two more references, from how Flink and Spark use HBase via SQL:

https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/hbase.html
https://stackoverflow.com/questions/39530938/sparksql-on-hbase-tables

-Rui

On Thu, Nov 5, 2020 at 9:46 AM Piotr Szuberski 
wrote:

> Thanks for the resources! I'll try to follow the BQ approach. I'd also add
> something like a flattened schema so the user can use simple types only. It
> would be limited to BINARY values though. Something like:
> CREATE EXTERNAL TABLE(
>   key VARCHAR NOT NULL,
>   family VARCHAR NOT NULL,
>   column VARCHAR NOT NULL,
>   value BINARY NOT NULL,
>   timestampMicros BIGINT NOT NULL
> )
> The cells array would be flattened (denormalized) and easy to use. In the
> case of single-valued cells it would also be quite efficient.
>
> On 2020/11/05 00:22:44, Brian Hulette  wrote:
> > I think we should take a look at how BigTable is integrated with other
> SQL
> > systems. For example we could get some inspiration from BigQuery's
> support
> > for querying BigTable data [1]. It looks like by default it uses
> something
> > like (1), but they recognize this is difficult to process with SQL, so
> they
> > have an option you can set to elevate certain columns as sub-fields (more
> > like (2)), and you can also indicate you only want to get the latest
> value
> > for each column.
> >
> > In any case this may be a good candidate for not requiring the user to
> > actually specify a schema, and instead letting the table be fully
> > determined by options.
> >
> > [1] https://cloud.google.com/bigquery/external-data-bigtable
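
For illustration, an options-driven definition in the CREATE EXTERNAL TABLE
style used elsewhere in this thread might look like the sketch below. The
TBLPROPERTIES keys are invented placeholders, loosely modeled on BigQuery's
Bigtable options; nothing here was settled in the discussion:

    CREATE EXTERNAL TABLE bigtableexample3()
    TYPE 'bigtable'
    LOCATION 'https://googleapis.com/bigtable/projects/projectId/instances/instanceId/tables/tableId'
    -- Hypothetical options: elevate two columns to top-level fields and
    -- keep only the latest cell per column.
    TBLPROPERTIES '{"columns": ["family1:column1", "family1:column2"],
                    "onlyReadLatest": "true"}'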
> >
> > On Tue, Nov 3, 2020 at 11:41 PM Piotr Szuberski <piotr.szuber...@polidea.com> wrote:
> >
> > > I've dug into the topic a bit and I think the 2nd approach will fit
> > > better. The schema in Bigtable is not supposed to change that often, and
> > > specifying our own schema is more SQL-like and will cause less potential
> > > trouble.
> > >
> > > On 2020/11/03 11:01:57, Piotr Szuberski 
> > > wrote:
> > > > I'm going to write a Bigtable table for BeamSQL and I have a question
> > > > about the schema design: which one would be preferable?
> > > >
> > > > Bigtable stores its data in a table with rows that contain a key and a
> > > > 3-dimensional array, where the 1st dimension is families with names, the
> > > > 2nd dimension is columns with qualifiers, and the 3rd is cells containing
> > > > a timestamp and a value.
> > > >
> > > > Two design solutions come to mind:
> > > > 1) Fix schema to be a generic Bigtable row:
> > > >
> > > > Row(key, Array(Row(family, Array(Row(qualifier, Array(Row(value,
> > > > timestamp)))))))
> > > >
> > > > Then the table creation definition would always be in form:
> > > >
> > > > CREATE TABLE bigtableexample1()
> > > > TYPE 'bigtable'
> > > > LOCATION 'https://googleapis.com/bigtable/projects/projectId/instances/instanceId/tables/tableId'
> > > >
> > > > 2) Let the user design their schema by providing the desired families
> > > > and columns, in something like:
> > > > CREATE TABLE bigtableexample2(
> > > >   key VARCHAR,
> > > >   family1 ROW<
> > > >     column1 ROW<
> > > >       cells ARRAY<ROW<
> > > >         value VARCHAR,
> > > >         timestamp BIGINT
> > > >       >>
> > > >     >,
> > > >     column2 ROW<
> > > >       cells ARRAY<ROW<
> > > >         value VARCHAR,
> > > >         timestamp BIGINT
> > > >       >>
> > > >     >
> > > >   >
> > > > )
> > > > TYPE 'bigtable'
> > > > LOCATION 'https://googleapis.com/bigtable/projects/projectId/instances/instanceId/tables/tableId'
> > > >
> > > > For me the 1st approach is more user-friendly (typing the schema for the
> > > > 2nd would be troublesome) and more flexible, especially when the row's
> > > > schema (families and columns) changes and a user wants to perform
> > > > 'SELECT * FROM bigtableexampleX'.
> > > >
> > > > WDYT? I'd welcome any feedback. Maybe there is some 3rd option that
> > > > would be better?
> > > >
> > >
> >
>


Upgrade instruction from TimerDataCoder to TimerDataCoderV2

2020-11-06 Thread Ke Wu
Hello,

I found that TimerDataCoderV2 was created to include the timer family id and
output timestamp fields in TimerData. In addition, the new fields are encoded
between the old fields, which I suppose means the V2 coder cannot decode data
that was encoded by the V1 coder, and vice versa. My question is: how should
we properly upgrade without losing existing state persisted in a store?

Best,
Ke

Re: Contributor permissions for Beam Jira

2020-11-06 Thread Ke Wu
Thank you, Alexey!

> On Nov 6, 2020, at 5:58 AM, Alexey Romanenko  wrote:
> 
> Done, I added you to contributors list.
> 
> Welcome! 
> 
> Please take a look at the Beam Contribution Guide if you haven't yet =)
> https://beam.apache.org/contribute/
> 
> Alexey
> 
>> On 5 Nov 2020, at 20:10, Ke Wu  wrote:
>> 
>> Absolutely, my jira username is kw2542
>> 
>> Thanks,
>> Ke
>> 
>>> On Nov 5, 2020, at 2:47 AM, Alexey Romanenko  
>>> wrote:
>>> 
>>> Hi,
>>> 
>>> Sure. Could you provide your Jira username, please?
>>> 
 On 5 Nov 2020, at 00:48, Ke Wu  wrote:
 
 Hello, 
 
 I am working on the Samza team at LinkedIn and I would like to contribute to
 the Samza runner in Beam. Could I please have permission to add/assign
 tickets on the Beam Jira?
 
 Best,
 Ke
>>> 
>> 
> 



Re: Looking for a PMC member to help with website development

2020-11-06 Thread Brian Hulette
Feel free to triage some reviews to me, Pablo :)
For larger questions about dividing up contributions and code architecture
(items 2 and 3), I think dev@ threads or the Slack channel would be a good
place to discuss anything that can't happen in a GitHub review. It's good
for such discussions to be public and archived.

Brian

On Thu, Nov 5, 2020 at 12:45 PM Pablo Estrada  wrote:

> For code review / changes / merges, committers are all that's necessary,
> and I am happy to help with anything that requires extra PMC privileges
> (also happy to do code reviews).
>
> You can mention me in PRs, and I'm happy to try and triage to other active
> committers.
> Best
> -P.
>
>
> On Thu, Nov 5, 2020 at 12:02 PM Gris Cuevas  wrote:
>
>> The reason why I was asking for a PMC member was to minimize dependencies
>> on the permissions needed for code reviews, commits, etc.
>>
>> I don't have full visibility to confirm that any committer could help
>> here. If that is the case then we can cast a broader net.
>>
>> Even though I am a committer, I can't help the team since I don't have
>> enough bandwidth or extensive experience developing websites.
>>
>> On 2020/11/04 17:27:53, Austin Bennett 
>> wrote:
>> > And, @Griselda Cuevas, not meaning to change the focus of the thread.
>> > It seemed you might have the ability to cast a wider net. But I also
>> > might be off on the differences in roles/rights/responsibilities.
>> >
>> > On Wed, Nov 4, 2020 at 9:26 AM Austin Bennett <whatwouldausti...@gmail.com> wrote:
>> >
>> > > To understand the differences in PMC vs committer: I'd like to
>> > > understand why a committer doesn't suffice for the listed requests
>> > > (and @Griselda Cuevas, you are a committer, but it seems you'd
>> > > potentially just want another committer to also review).
>> > >
>> > > It seems the website is less about shifting the direction of the
>> > > project or exposing APIs that we may feel compelled to support long
>> > > term, which is why I naively assume PMC is not especially needed (and
>> > > it does look like this is being done with full visibility to the PMC
>> > > at a high level).
>> > >
>> > >
>> > > On Wed, Nov 4, 2020 at 8:35 AM Gris Cuevas  wrote:
>> > >
>> > >> Hi folks,
>> > >>
>> > >> We're going to move into development phase for the new website and we
>> > >> need a point of contact in the PMC who could help us with the
>> following:
>> > >> - Review of contribution
>> > >> - Input on implementation questions such as how to divide
>> contributions
>> > >> to make them easier to review/edit
>> > >> - Code architecture questions
>> > >>
>> > >> And other questions that come up from the development, would anyone
>> > >> volunteer to help us with this?
>> > >>
>> > >> Gris
>> > >>
>> > >
>> >
>>
>


Re: Contributor permissions for Beam Jira

2020-11-06 Thread Alexey Romanenko
Done, I added you to contributors list.

Welcome! 

Please take a look at the Beam Contribution Guide if you haven't yet =)
https://beam.apache.org/contribute/

Alexey

> On 5 Nov 2020, at 20:10, Ke Wu  wrote:
> 
> Absolutely, my jira username is kw2542
> 
> Thanks,
> Ke
> 
>> On Nov 5, 2020, at 2:47 AM, Alexey Romanenko  
>> wrote:
>> 
>> Hi,
>> 
>> Sure. Could you provide your Jira username, please?
>> 
>>> On 5 Nov 2020, at 00:48, Ke Wu  wrote:
>>> 
>>> Hello, 
>>> 
>>> I am working on the Samza team at LinkedIn and I would like to contribute to
>>> the Samza runner in Beam. Could I please have permission to add/assign
>>> tickets on the Beam Jira?
>>> 
>>> Best,
>>> Ke
>> 
>