[jira] [Created] (AVRO-2480) Java Schema Parser adds unparsable files to known types

2019-07-23 Thread Dylan Patterson (JIRA)
Dylan Patterson created AVRO-2480:
-

 Summary: Java Schema Parser adds unparsable files to known types
 Key: AVRO-2480
 URL: https://issues.apache.org/jira/browse/AVRO-2480
 Project: Apache Avro
  Issue Type: Bug
  Components: java
Affects Versions: 1.8.2
Reporter: Dylan Patterson


The behavior of the internal types map is inconsistent when parsing multiple types. 
 If a schema fails to parse due to a missing dependent type, the depending type 
still gets added, including its (apparently unparsable) schema. Furthermore, this 
schema does not have the `isError()` flag set. This only happens for record 
types.
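The failure mode described here matches a familiar anti-pattern: registering a type in the shared map before its schema has fully validated. A minimal, language-neutral sketch in Python (hypothetical names; this is not the actual Avro parser code) of how a parser can leave a half-parsed record behind:

```python
class SchemaParseError(Exception):
    pass

class TinyParser:
    """Toy parser illustrating the bug: the record is registered in
    known_types *before* its fields are resolved, so a failed parse
    still leaves the broken entry behind."""

    PRIMITIVES = ("int", "long", "string")

    def __init__(self):
        self.known_types = {}

    def parse_record(self, name, field_types):
        record = {"name": name, "fields": []}
        # Bug: register before validation completes (enables recursion,
        # but leaks partial state on failure).
        self.known_types[name] = record
        for ftype in field_types:
            if ftype not in self.PRIMITIVES and ftype not in self.known_types:
                raise SchemaParseError("unknown type: " + ftype)
            record["fields"].append(ftype)
        return record

parser = TinyParser()
try:
    parser.parse_record("Outer", ["int", "MissingDep"])
except SchemaParseError:
    pass

# The unparsable record is still in the map:
print("Outer" in parser.known_types)  # -> True
```

A fix in this sketch would be to parse into a local structure first and publish to `known_types` only on success (or roll back on failure).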



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (AVRO-2480) Java Schema Parser adds unparsable files to known types

2019-07-23 Thread Dylan Patterson (JIRA)


 [ 
https://issues.apache.org/jira/browse/AVRO-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dylan Patterson updated AVRO-2480:
--
Description: The behavior of the internal types map is inconsistent when 
parsing multiple types. If a schema fails to parse due to a missing dependent 
type, the depending type still gets added, including its (apparently 
unparsable) schema. Furthermore, this schema does not have the `isError()` flag set. 
 This only happens for record types. It makes it largely impossible to process 
multiple dependent schemas.  (was: The behavior of the internal 
types map is inconsistent when parsing multiple types. If a schema fails to parse due to a 
missing dependent type, the depending type still gets added, including its 
(apparently unparsable) schema. Furthermore, this schema does not have the 
`isError()` flag set. This only happens for record types.)

> Java Schema Parser adds unparsable files to known types
> ---
>
> Key: AVRO-2480
> URL: https://issues.apache.org/jira/browse/AVRO-2480
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.8.2
>Reporter: Dylan Patterson
>Priority: Major
>
> The behavior of the internal types map is inconsistent when parsing multiple 
> types. If a schema fails to parse due to a missing dependent type, the 
> depending type still gets added, including its (apparently unparsable) 
> schema. Furthermore, this schema does not have the `isError()` flag set. This 
> only happens for record types. It makes it largely impossible to process 
> multiple dependent schemas.





[jira] [Comment Edited] (AVRO-1953) ArrayIndexOutOfBoundsException in org.apache.avro.io.parsing.Symbol$Alternative.getSymbol

2019-07-23 Thread michael elbaz (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890975#comment-16890975
 ] 

michael elbaz edited comment on AVRO-1953 at 7/23/19 1:00 PM:
--

Hello, is someone taking care of this defect? 
https://issues.apache.org/jira/browse/CAMEL-13737


was (Author: michael992):
Hello, is anyone taking care of this? 
https://issues.apache.org/jira/browse/CAMEL-13737

> ArrayIndexOutOfBoundsException in 
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol
> -
>
> Key: AVRO-1953
> URL: https://issues.apache.org/jira/browse/AVRO-1953
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.4
>Reporter: Yong Zhang
>Priority: Major
>
> We are facing an issue where the Avro MapReduce job cannot process the Avro 
> file in the reducer. 
> Here is the schema of our data:
> {
> "namespace" : "our package name",
> "type" : "record",
> "name" : "Lists",
> "fields" : [
> {"name" : "account_id", "type" : "long"},
> {"name" : "list_id", "type" : "string"},
> {"name" : "sequence_id", "type" : ["int", "null"]} ,
> {"name" : "name", "type" : ["string", "null"]},
> {"name" : "state", "type" : ["string", "null"]},
> {"name" : "description", "type" : ["string", "null"]},
> {"name" : "dynamic_filtered_list", "type" : ["int", "null"]},
> {"name" : "filter_criteria", "type" : ["string", "null"]},
> {"name" : "created_at", "type" : ["long", "null"]},
> {"name" : "updated_at", "type" : ["long", "null"]},
> {"name" : "deleted_at", "type" : ["long", "null"]},
> {"name" : "favorite", "type" : ["int", "null"]},
> {"name" : "delta", "type" : ["boolean", "null"]},
> {
> "name" : "list_memberships", "type" : {
> "type" : "array", "items" : {
> "name" : "ListMembership", "type" : "record",
> "fields" : [
> {"name" : "channel_id", "type" : "string"},
> {"name" : "created_at", "type" : ["long", "null"]},
> {"name" : "created_source", "type" : ["string", 
> "null"]},
> {"name" : "deleted_at", "type" : ["long", "null"]},
> {"name" : "sequence_id", "type" : ["int", "null"]}
> ]
> }
> }
> }
> ]
> }
> Our MapReduce job computes the delta of the above dataset and uses our merge 
> logic to merge the latest change into the dataset.
> The whole MR job runs daily and worked fine for 18 months. During this time, 
> we saw the merge MapReduce job fail twice with the following error (in the 
> reducer stage, which means the Avro data is read successfully and sent to 
> the reducers, where we sort the data based on the key and timestamp so the 
> delta can be merged on the reducer side):
> java.lang.ArrayIndexOutOfBoundsException at 
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:364) at 
> org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229) at 
> org.apache.avro.io.parsing.Parser.advance(Parser.java:88) at 
> org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206) at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152) 
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:177)
>  at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:148) 
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139) 
> at 
> org.apache.avro.hadoop.io.AvroDeserializer.deserialize(AvroDeserializer.java:108)
>  at 
> org.apache.avro.hadoop.io.AvroDeserializer.deserialize(AvroDeserializer.java:48)
>  at 
> org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:142)
>  at 
> org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:117)
>  at 
> org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:297)
>  at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:165) at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:652) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420) at 
> org.apache.hadoop.mapred.Child$4.run(Child.java:255) at 
> java.security.AccessController.doPrivileged(AccessController.java:366) at 
> javax.security.auth.Subject.doAs(Subject.java:572) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
>  at org.apache.hadoop.mapred.Child.main(Child.java:249)
> The MapReduce job will eventually fail in the reducer stage. I don't think 
> our 

[jira] [Commented] (AVRO-1953) ArrayIndexOutOfBoundsException in org.apache.avro.io.parsing.Symbol$Alternative.getSymbol

2019-07-23 Thread michael elbaz (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890975#comment-16890975
 ] 

michael elbaz commented on AVRO-1953:
-

Hello, is anyone taking care of this? 
https://issues.apache.org/jira/browse/CAMEL-13737

> ArrayIndexOutOfBoundsException in 
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol
> -
>
> Key: AVRO-1953
> URL: https://issues.apache.org/jira/browse/AVRO-1953
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.4
>Reporter: Yong Zhang
>Priority: Major
>
> We are facing an issue where the Avro MapReduce job cannot process the Avro 
> file in the reducer. 
> Here is the schema of our data:
> {
> "namespace" : "our package name",
> "type" : "record",
> "name" : "Lists",
> "fields" : [
> {"name" : "account_id", "type" : "long"},
> {"name" : "list_id", "type" : "string"},
> {"name" : "sequence_id", "type" : ["int", "null"]} ,
> {"name" : "name", "type" : ["string", "null"]},
> {"name" : "state", "type" : ["string", "null"]},
> {"name" : "description", "type" : ["string", "null"]},
> {"name" : "dynamic_filtered_list", "type" : ["int", "null"]},
> {"name" : "filter_criteria", "type" : ["string", "null"]},
> {"name" : "created_at", "type" : ["long", "null"]},
> {"name" : "updated_at", "type" : ["long", "null"]},
> {"name" : "deleted_at", "type" : ["long", "null"]},
> {"name" : "favorite", "type" : ["int", "null"]},
> {"name" : "delta", "type" : ["boolean", "null"]},
> {
> "name" : "list_memberships", "type" : {
> "type" : "array", "items" : {
> "name" : "ListMembership", "type" : "record",
> "fields" : [
> {"name" : "channel_id", "type" : "string"},
> {"name" : "created_at", "type" : ["long", "null"]},
> {"name" : "created_source", "type" : ["string", 
> "null"]},
> {"name" : "deleted_at", "type" : ["long", "null"]},
> {"name" : "sequence_id", "type" : ["int", "null"]}
> ]
> }
> }
> }
> ]
> }
> Our MapReduce job computes the delta of the above dataset and uses our merge 
> logic to merge the latest change into the dataset.
> The whole MR job runs daily and worked fine for 18 months. During this time, 
> we saw the merge MapReduce job fail twice with the following error (in the 
> reducer stage, which means the Avro data is read successfully and sent to 
> the reducers, where we sort the data based on the key and timestamp so the 
> delta can be merged on the reducer side):
> java.lang.ArrayIndexOutOfBoundsException at 
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:364) at 
> org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229) at 
> org.apache.avro.io.parsing.Parser.advance(Parser.java:88) at 
> org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206) at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152) 
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:177)
>  at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:148) 
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139) 
> at 
> org.apache.avro.hadoop.io.AvroDeserializer.deserialize(AvroDeserializer.java:108)
>  at 
> org.apache.avro.hadoop.io.AvroDeserializer.deserialize(AvroDeserializer.java:48)
>  at 
> org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:142)
>  at 
> org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:117)
>  at 
> org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:297)
>  at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:165) at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:652) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420) at 
> org.apache.hadoop.mapred.Child$4.run(Child.java:255) at 
> java.security.AccessController.doPrivileged(AccessController.java:366) at 
> javax.security.auth.Subject.doAs(Subject.java:572) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
>  at org.apache.hadoop.mapred.Child.main(Child.java:249)
> The MapReduce job will eventually fail in the reducer stage. I don't think 
> our data is corrupted, as it is read fine in the map stage. Every time we 
> get this error, we have to fetch the whole huge dataset from the source, then 
> rebuild the Avro, 

[jira] [Commented] (AVRO-2461) Add compression level support to avro-tools' fromjson and recodec

2019-07-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890916#comment-16890916
 ] 

Hudson commented on AVRO-2461:
--

SUCCESS: Integrated in Jenkins build AvroJava #706 (See 
[https://builds.apache.org/job/AvroJava/706/])
AVRO-2461: Add compression level support to fromjson and recodec (#576) 
(nandorkollar: 
[https://github.com/apache/avro/commit/ad9614acebcf686a67f052ba80ca8eb872402066])
* (edit) lang/java/tools/src/main/java/org/apache/avro/tool/Util.java
* (add) lang/java/tools/src/test/java/org/apache/avro/tool/TestUtil.java


> Add compression level support to avro-tools' fromjson and recodec
> -
>
> Key: AVRO-2461
> URL: https://issues.apache.org/jira/browse/AVRO-2461
> Project: Apache Avro
>  Issue Type: Improvement
>  Components: java, tools
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Major
> Fix For: 1.10.0
>
>
> These commands have the {{--level}} option, but it's applied only to deflate 
> and xz.
> {code}
> $ java -jar avro-tools-1.9.0.jar fromjson
> Expected 1 arg: input_file
> Option  Description
> --  ---
> --codec Compression codec (default: null)
> --level  Compression level (only applies to deflate and xz)
>   (default: -1)
> {code}
> Zstandard also has a compression level feature, so it would be useful if users 
> could specify it.
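For context on what a codec level controls, deflate (one of the codecs the option already supports) exposes the same speed-versus-size trade-off through Python's standard `zlib` module. This sketch is purely illustrative and does not invoke avro-tools:

```python
import zlib

# Highly compressible sample payload.
data = b"avro " * 10_000

fast = zlib.compress(data, 1)   # level 1: fastest, larger output
best = zlib.compress(data, 9)   # level 9: slowest, smallest output

print(len(fast), len(best))
assert len(best) <= len(fast)
assert zlib.decompress(best) == data  # lossless at every level
```

Zstandard exposes an analogous level parameter, which is why plumbing `--level` through to it is a natural extension of the existing flag.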





[jira] [Commented] (AVRO-2445) Remove Python <2.7 compatibility

2019-07-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890914#comment-16890914
 ] 

Hudson commented on AVRO-2445:
--

SUCCESS: Integrated in Jenkins build AvroJava #706 (See 
[https://builds.apache.org/job/AvroJava/706/])
AVRO-2445: Remove StoppableHTTPServer Polyfill (#567) (nandorkollar: 
[https://github.com/apache/avro/commit/efe1181bc76adc89a69f0763ad469386db84ed67])
* (edit) lang/py/src/avro/tool.py


> Remove Python <2.7 compatibility
> 
>
> Key: AVRO-2445
> URL: https://issues.apache.org/jira/browse/AVRO-2445
> Project: Apache Avro
>  Issue Type: Task
>  Components: python
>Affects Versions: 1.9.1
>Reporter: Michael A. Smith
>Assignee: Michael A. Smith
>Priority: Major
> Fix For: 1.10.0
>
>
> I'll send out a separate note to the dev list about this, but I would like to 
> remove all the polyfills for python 2.4, 2.5 and 2.6.
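The polyfills in question follow the usual version-guard pattern. A hypothetical sketch (not the actual Avro code) of what removing one looks like, assuming the codebase now requires Python 2.7 or later:

```python
import sys

# Before the cleanup: a guarded fallback for interpreters older than 2.7,
# where subprocess.check_output did not exist (hypothetical example).
if sys.version_info < (2, 7):
    def check_output(cmd):
        raise NotImplementedError("polyfill for pre-2.7 interpreters")
else:
    from subprocess import check_output

# After the cleanup, only the plain import would remain:
#   from subprocess import check_output
print(check_output([sys.executable, "-c", "print('ok')"]).strip())
```

Dropping the guard simplifies the module and removes dead branches that can no longer be exercised on supported interpreters.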





[jira] [Commented] (AVRO-2466) Fix a malformed schema in the share/test/schemas directory

2019-07-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890915#comment-16890915
 ] 

Hudson commented on AVRO-2466:
--

SUCCESS: Integrated in Jenkins build AvroJava #706 (See 
[https://builds.apache.org/job/AvroJava/706/])
AVRO-2466: Fix a malformed schema in the share/test/schemas directory 
(nandorkollar: 
[https://github.com/apache/avro/commit/e0ae0cd8a36520907304d52f4a19668a3218d5ea])
* (edit) share/test/schemas/reserved.avsc


> Fix a malformed schema in the share/test/schemas directory
> --
>
> Key: AVRO-2466
> URL: https://issues.apache.org/jira/browse/AVRO-2466
> Project: Apache Avro
>  Issue Type: Bug
>  Components: misc
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 1.10.0
>
>
> The schema defined in share/test/schemas/reserved.avsc is invalid because of 
> its trailing comma:
> {code}
> $ python -c 'from avro.schema import parse; 
> parse(open("../../share/test/schemas/reserved.avsc").read())'
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
>   File "/home/sekikn/repo/avro/lang/py/src/avro/schema.py", line 976, in parse
> json_data = json.loads(json_string)
>   File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
> raise ValueError(errmsg("Extra data", s, end, len(s)))
> avro.schema.SchemaParseException: Error parsing JSON: {"name": 
> "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> , error = Extra data: line 2 column 39 - line 3 column 1 (char 96 - 98)
> $ cat ../../share/test/schemas/reserved.avsc
> {"name": "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> {code}
> This file doesn't seem to be used as far as I investigated, but I'd rather 
> fix it than remove it, since it might be useful for some test.
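The failure is reproducible with nothing but the standard library: `json.loads` stops at the end of the first JSON value, and the trailing comma makes it report the leftover bytes as "Extra data". A minimal sketch, using inline schema text of the same shape rather than the actual reserved.avsc file:

```python
import json

# Same shape as share/test/schemas/reserved.avsc, trailing comma included.
malformed = ('{"name": "org.apache.avro.test.Reserved", "type": "enum",\n'
             ' "symbols": ["default","class","int"]},\n')

try:
    json.loads(malformed)
except ValueError as e:  # json.JSONDecodeError subclasses ValueError
    print(e)  # message mentions "Extra data"

# Dropping the trailing comma yields a valid JSON document.
fixed = malformed.rstrip().rstrip(",")
schema = json.loads(fixed)
print(schema["type"])  # -> enum
```

This is why the fix amounts to deleting the stray comma: the schema body itself is otherwise well-formed.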





[jira] [Updated] (AVRO-2461) Add compression level support to avro-tools' fromjson and recodec

2019-07-23 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/AVRO-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated AVRO-2461:

   Resolution: Fixed
Fix Version/s: 1.10.0
   Status: Resolved  (was: Patch Available)

> Add compression level support to avro-tools' fromjson and recodec
> -
>
> Key: AVRO-2461
> URL: https://issues.apache.org/jira/browse/AVRO-2461
> Project: Apache Avro
>  Issue Type: Improvement
>  Components: java, tools
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Major
> Fix For: 1.10.0
>
>
> These commands have the {{--level}} option, but it's applied only to deflate 
> and xz.
> {code}
> $ java -jar avro-tools-1.9.0.jar fromjson
> Expected 1 arg: input_file
> Option  Description
> --  ---
> --codec Compression codec (default: null)
> --level  Compression level (only applies to deflate and xz)
>   (default: -1)
> {code}
> Zstandard also has a compression level feature, so it would be useful if users 
> could specify it.





[jira] [Commented] (AVRO-2461) Add compression level support to avro-tools' fromjson and recodec

2019-07-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890855#comment-16890855
 ] 

ASF subversion and git services commented on AVRO-2461:
---

Commit ad9614acebcf686a67f052ba80ca8eb872402066 in avro's branch 
refs/heads/master from Kengo Seki
[ https://gitbox.apache.org/repos/asf?p=avro.git;h=ad9614a ]

AVRO-2461: Add compression level support to fromjson and recodec (#576)



> Add compression level support to avro-tools' fromjson and recodec
> -
>
> Key: AVRO-2461
> URL: https://issues.apache.org/jira/browse/AVRO-2461
> Project: Apache Avro
>  Issue Type: Improvement
>  Components: java, tools
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Major
>
> These commands have the {{--level}} option, but it's applied only to deflate 
> and xz.
> {code}
> $ java -jar avro-tools-1.9.0.jar fromjson
> Expected 1 arg: input_file
> Option  Description
> --  ---
> --codec Compression codec (default: null)
> --level  Compression level (only applies to deflate and xz)
>   (default: -1)
> {code}
> Zstandard also has a compression level feature, so it would be useful if users 
> could specify it.





[jira] [Updated] (AVRO-2466) Fix a malformed schema in the share/test/schemas directory

2019-07-23 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/AVRO-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated AVRO-2466:

   Resolution: Fixed
Fix Version/s: 1.10.0
   Status: Resolved  (was: Patch Available)

> Fix a malformed schema in the share/test/schemas directory
> --
>
> Key: AVRO-2466
> URL: https://issues.apache.org/jira/browse/AVRO-2466
> Project: Apache Avro
>  Issue Type: Bug
>  Components: misc
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 1.10.0
>
>
> The schema defined in share/test/schemas/reserved.avsc is invalid because of 
> its trailing comma:
> {code}
> $ python -c 'from avro.schema import parse; 
> parse(open("../../share/test/schemas/reserved.avsc").read())'
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
>   File "/home/sekikn/repo/avro/lang/py/src/avro/schema.py", line 976, in parse
> json_data = json.loads(json_string)
>   File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
> raise ValueError(errmsg("Extra data", s, end, len(s)))
> avro.schema.SchemaParseException: Error parsing JSON: {"name": 
> "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> , error = Extra data: line 2 column 39 - line 3 column 1 (char 96 - 98)
> $ cat ../../share/test/schemas/reserved.avsc
> {"name": "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> {code}
> This file doesn't seem to be used as far as I investigated, but I'd rather 
> fix it than remove it, since it might be useful for some test.





[jira] [Commented] (AVRO-2466) Fix a malformed schema in the share/test/schemas directory

2019-07-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890850#comment-16890850
 ] 

ASF subversion and git services commented on AVRO-2466:
---

Commit e0ae0cd8a36520907304d52f4a19668a3218d5ea in avro's branch 
refs/heads/master from Kengo Seki
[ https://gitbox.apache.org/repos/asf?p=avro.git;h=e0ae0cd ]

AVRO-2466: Fix a malformed schema in the share/test/schemas directory (#578)



> Fix a malformed schema in the share/test/schemas directory
> --
>
> Key: AVRO-2466
> URL: https://issues.apache.org/jira/browse/AVRO-2466
> Project: Apache Avro
>  Issue Type: Bug
>  Components: misc
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>
> The schema defined in share/test/schemas/reserved.avsc is invalid because of 
> its trailing comma:
> {code}
> $ python -c 'from avro.schema import parse; 
> parse(open("../../share/test/schemas/reserved.avsc").read())'
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
>   File "/home/sekikn/repo/avro/lang/py/src/avro/schema.py", line 976, in parse
> json_data = json.loads(json_string)
>   File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
> raise ValueError(errmsg("Extra data", s, end, len(s)))
> avro.schema.SchemaParseException: Error parsing JSON: {"name": 
> "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> , error = Extra data: line 2 column 39 - line 3 column 1 (char 96 - 98)
> $ cat ../../share/test/schemas/reserved.avsc
> {"name": "org.apache.avro.test.Reserved", "type": "enum",
>  "symbols": ["default","class","int"]},
> {code}
> This file doesn't seem to be used as far as I investigated, but I'd rather 
> fix it than remove it, since it might be useful for some test.





[jira] [Commented] (AVRO-2445) Remove Python <2.7 compatibility

2019-07-23 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/AVRO-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890848#comment-16890848
 ] 

ASF subversion and git services commented on AVRO-2445:
---

Commit efe1181bc76adc89a69f0763ad469386db84ed67 in avro's branch 
refs/heads/master from Michael A. Smith
[ https://gitbox.apache.org/repos/asf?p=avro.git;h=efe1181 ]

AVRO-2445: Remove StoppableHTTPServer Polyfill (#567)



> Remove Python <2.7 compatibility
> 
>
> Key: AVRO-2445
> URL: https://issues.apache.org/jira/browse/AVRO-2445
> Project: Apache Avro
>  Issue Type: Task
>  Components: python
>Affects Versions: 1.9.1
>Reporter: Michael A. Smith
>Assignee: Michael A. Smith
>Priority: Major
> Fix For: 1.10.0
>
>
> I'll send out a separate note to the dev list about this, but I would like to 
> remove all the polyfills for python 2.4, 2.5 and 2.6.


