[ https://issues.apache.org/jira/browse/HBASE-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12663443#action_12663443 ]
Brian Beggs commented on HBASE-1064:
------------------------------------
I'm working on a patch for this issue, along with a few other issues I have
found. A few questions:
When updating a row with the current XML structure:
<column>
<name>other:</name>
<value>test5</value>
</column>
you can only update one column at a time. If that is how it should stay, fine,
but I could allow multiple columns of a row to be updated at once by adding a
new root element to the XML (see the sketch below).
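For example, a hypothetical structure with a wrapping root element (the element
names here are only a sketch, not what the patch currently accepts) might look
like:
<columns>
<column>
<name>other:</name>
<value>test5</value>
</column>
<column>
<name>other2:</name>
<value>test6</value>
</column>
</columns>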
Should the input values be base64 encoded as well?
For the JSON implementation, do we want the output values base64 encoded, and
should the JSON input be base64 encoded too? It may be preferable not to base64
encode the JSON, since base64 encode/decode is not available natively in the
language.
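For illustration only: if values were base64 encoded, the earlier example would
carry "dGVzdDU=" (the base64 encoding of "test5") instead of the raw string:
<column>
<name>other:</name>
<value>dGVzdDU=</value>
</column>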
> HBase REST xml/json improvements
> --------------------------------
>
> Key: HBASE-1064
> URL: https://issues.apache.org/jira/browse/HBASE-1064
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: rest
> Reporter: Brian Beggs
> Attachments: hbase-1064-patch-v2.patch, hbase-1064-patch-v3.patch,
> json2.jar, REST-Upgrade-Notes.txt, RESTPatch-pass1.patch
>
>
> I've begun work on a REST-based interface for HBase that can use both JSON
> and XML and is extensible enough to add new formats down the road. I'm at a
> point where I would like to submit it for review and get feedback as I
> continue to work toward new features.
> Attached to this issue you will find the patch for the changes made so far,
> along with a jar file needed for the JSON serialization. Below you will also
> find my notes on how to use the parts of the interface that are finished.
> This patch is based on JIRA issues HBASE-814 and HBASE-815.
> I am interested in getting feedback on:
> -what you guys think works
> -what doesn't work for the project
> -anything that may need to be added
> -code style
> -anything else...
> Finished components:
> -framework around parsing json/xml input
> -framework around serializing xml/json output
> -changes to exception handling
> -changes to the response object to better handle the serializing of output
> data
> -table CRUD calls
> -Full table fetching
> -creating/fetching scanners
> TODO:
> -fix up the filtering with scanners
> -row insert/delete operations
> -individual row fetching
> -cell fetching interface
> -scanner use interface
> Here are the wiki(ish) notes for what is done to this point:
> REST Service for HBase Notes:
> GET /
> -retrieves a list of all the tables in HBase along with their metadata
> curl -v -H "Accept: text/xml" -X GET -T - http://localhost:60050/
> curl -v -H "Accept: application/json" -X GET -T - http://localhost:60050/
> POST /
> -Create a table
> curl -H "Content-Type: text/xml" -H "Accept: text/xml" -v -X POST -T -
> http://localhost:60050/newTable
> <table>
> <name>test14</name>
> <columnfamilies>
> <columnfamily>
> <name>subscription</name>
> <max-versions>2</max-versions>
> <compression>NONE</compression>
> <in-memory>false</in-memory>
> <block-cache>true</block-cache>
> </columnfamily>
> </columnfamilies>
> </table>
> Response:
> <status><code>200</code><message>success</message></status>
> JSON:
> curl -H "Content-Type: application/json" -H "Accept: application/json" -v -X
> POST -T - http://localhost:60050/newTable
> {"name":"test5", "column_families":[{
> "name":"columnfam1",
> "bloomfilter":true,
> "time_to_live":10,
> "in_memory":false,
> "max_versions":2,
> "compression":"",
> "max_value_length":50,
> "block_cache_enabled":true
> }
> ]}
> *NOTE* the compression value is an enum defined in class HColumnDescriptor.CompressionType
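> A successful JSON create presumably returns the JSON analogue of the XML
> status shown above; a purely hypothetical sketch (the actual field layout may
> differ):
> {"status":{"code":200,"message":"success"}}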
> GET /[table_name]
> -returns all records for the table
> curl -v -H "Accept: text/xml" -X GET -T - http://localhost:60050/tablename
> curl -v -H "Accept: application/json" -X GET -T -
> http://localhost:60050/tablename
> GET /[table_name]
> -Parameter: action
> metadata - returns the metadata for this table
> regions - returns the regions for this table (example below)
> curl -v -H "Accept: text/xml" -X GET -T -
> http://localhost:60050/pricing1?action=metadata
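> Assuming the regions action follows the same pattern as metadata (not
> verified against the patch), the call would presumably be:
> curl -v -H "Accept: text/xml" -X GET -T -
> http://localhost:60050/pricing1?action=regions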
> Update Table
> PUT /[table_name]
> -updates a table
> curl -v -H "Content-Type: text/xml" -H "Accept: text/xml" -X PUT -T -
> http://localhost:60050/pricing1
> <columnfamilies>
> <columnfamily>
> <name>subscription</name>
> <max-versions>3</max-versions>
> <compression>NONE</compression>
> <in-memory>false</in-memory>
> <block-cache>true</block-cache>
> </columnfamily>
> <columnfamily>
> <name>subscription1</name>
> <max-versions>3</max-versions>
> <compression>NONE</compression>
> <in-memory>false</in-memory>
> <block-cache>true</block-cache>
> </columnfamily>
> </columnfamilies>
> curl -v -H "Content-Type: application/json" -H "Accept: application/json" -X
> PUT -T - http://localhost:60050/pricing1
> {"column_families":[{
> "name":"columnfam1",
> "bloomfilter":true,
> "time_to_live":10,
> "in_memory":false,
> "max_versions":2,
> "compression":"",
> "max_value_length":50,
> "block_cache_enabled":true
> },
> {
> "name":"columnfam2",
> "bloomfilter":true,
> "time_to_live":10,
> "in_memory":false,
> "max_versions":2,
> "compression":"",
> "max_value_length":50,
> "block_cache_enabled":true
> }
> ]}
> Delete Table
> curl -v -H "Content-Type: text/xml" -H "Accept: text/xml" -X DELETE -T -
> http://localhost:60050/TEST16
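> The JSON variant presumably just swaps the content negotiation headers
> (unverified):
> curl -v -H "Content-Type: application/json" -H "Accept: application/json" -X
> DELETE -T - http://localhost:60050/TEST16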
> Creating a scanner
> curl -v -H "Content-Type: application/json" -H "Accept: application/json" -X
> POST -T - http://localhost:60050/TEST16?action=newscanner
> //TODO fix up the scanner filters.
> response:
> xml:
> <scanner>
> <id>
> 2
> </id>
> </scanner>
> json:
> {"id":1}
> Using a scanner
> curl -v -H "Content-Type: application/json" -H "Accept: application/json" -X
> POST -T -
> "http://localhost:60050/TEST16?action=scan&scannerId=<scannerID>&numrows=<num
> rows to return>"
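> For example, using the scanner id 2 returned above and requesting 10 rows
> (illustrative values only):
> curl -v -H "Content-Type: application/json" -H "Accept: application/json" -X
> POST -T -
> "http://localhost:60050/TEST16?action=scan&scannerId=2&numrows=10"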
> This would be my first submission to an open source project of this size, so
> please, give it to me rough. =)
> Thanks.