Hi Jeremy,
Thanks for looking into this. I have no idea where to even begin parsing
that JSON. It reminds me of something that Mat and Jed were doing a while
ago for the Twederation / Tiddlyverse
<https://groups.google.com/forum/#!searchin/tiddlywiki/Twederation/tiddlywiki/_v4CYU5Hx-Q/gGP3_9fBGAAJ>.
Maybe this will be of some use to them, and if they find it useful and
write a parser for it, I could use the same in my semester project. Anyway,
this is well beyond my skills for now, so I'll stick with what is working
for me, even if it is a bit clunky.
Hegart.
On Monday, 4 April 2016 07:15:15 UTC+12, Jeremy Ruston wrote:
>
> Hi Hegart
>
> Professor Schneider at *{{DesignWrite}}* recently shared a link to this
> blog post <https://ctrlq.org/code/20004-google-spreadsheets-json>, which
> suggests it is possible to access the information in a Google Sheet as a
> JSON file using jQuery. This looks very promising for my semester
> project, but I have no idea how to implement it, as I am a total n00b at
> JavaScript.
>
>
> Interesting. I checked it out by creating a simple public spreadsheet. The
> sharing works as advertised; the JSON format used is pretty complex, but
> parseable.
>
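[Editor's sketch, not part of the original email: the public feed URL follows a fixed pattern, so fetching it from a page that already loads jQuery is a one-liner. The `od6` worksheet id and the `?alt=json` switch are taken from the `self` link in Jeremy's sample below; `$.getJSON` is standard jQuery.]

```javascript
// Build the public JSON feed URL for a published Google Sheet.
// "od6" is the id of the first worksheet; "?alt=json" asks for JSON
// instead of the default Atom XML.
function feedUrl(key) {
  return "https://spreadsheets.google.com/feeds/list/" + key +
    "/od6/public/basic?alt=json";
}

// Usage, in a page that loads jQuery:
// $.getJSON(feedUrl("1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4"),
//   function (data) { console.log(data.feed.entry.length); });
```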
> My sample spreadsheet looks like this:
>
>   Once I   caught a   fish    alive
>   Six      Seven      Eight   Nine   Ten
>   Then I   put it     in      again
>
> The resulting JSON is attached below. Weirdly, it doesn’t even place the
> cells of each row in separate entries; they’re merged together into a
> single string, delimited by colons and commas.
>
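[Editor's sketch, not part of the original email: one way to unpick that merged format. Caveats: the first column's name never appears in the basic feed, so the `one` below is a guess, and the comma split is only a heuristic — a cell value containing a comma followed by something that looks like a column name would be mangled.]

```javascript
// Unpack one entry of the Google Sheets "list" feed (alt=json).
// The first column's value arrives in entry.title.$t; the remaining
// columns are mashed into entry.content.$t as "name: value, name: value".
function parseEntry(entry) {
  var row = { one: entry.title.$t }; // "one" is a guess; the feed omits the first column's name
  // Split on commas only when followed by "name:", so commas inside
  // values mostly survive (heuristic, not bulletproof).
  entry.content.$t.split(/,\s*(?=\w+:)/).forEach(function (pair) {
    var i = pair.indexOf(":");
    if (i !== -1) {
      row[pair.slice(0, i).trim()] = pair.slice(i + 1).trim();
    }
  });
  return row;
}
```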
> Anyhow, I’m sure that with a bit of research one could figure out the
> format and get something useful out of it, but it feels a bit
> Google-specific compared to the general utility of extending TW with the
> CSV import facilities you need.
>
> Best wishes
>
> Jeremy.
>
> {
>   "version": "1.0",
>   "encoding": "UTF-8",
>   "feed": {
>     "xmlns": "http:\/\/www.w3.org\/2005\/Atom",
>     "xmlns$openSearch": "http:\/\/a9.com\/-\/spec\/opensearchrss\/1.0\/",
>     "xmlns$gsx": "http:\/\/schemas.google.com\/spreadsheets\/2006\/extended",
>     "id": {
>       "$t": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic"
>     },
>     "updated": {
>       "$t": "2016-04-03T19:02:55.541Z"
>     },
>     "category": [
>       {
>         "scheme": "http:\/\/schemas.google.com\/spreadsheets\/2006",
>         "term": "http:\/\/schemas.google.com\/spreadsheets\/2006#list"
>       }
>     ],
>     "title": {
>       "type": "text",
>       "$t": "Sheet1"
>     },
>     "link": [
>       {
>         "rel": "alternate",
>         "type": "application\/atom+xml",
>         "href": "https:\/\/docs.google.com\/spreadsheets\/d\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/pubhtml"
>       },
>       {
>         "rel": "http:\/\/schemas.google.com\/g\/2005#feed",
>         "type": "application\/atom+xml",
>         "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic"
>       },
>       {
>         "rel": "http:\/\/schemas.google.com\/g\/2005#post",
>         "type": "application\/atom+xml",
>         "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic"
>       },
>       {
>         "rel": "self",
>         "type": "application\/atom+xml",
>         "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic?alt=json"
>       }
>     ],
>     "author": [
>       {
>         "name": {
>           "$t": "jeremy.ruston"
>         },
>         "email": {
>           "$t": "[email protected]"
>         }
>       }
>     ],
>     "openSearch$totalResults": {
>       "$t": "3"
>     },
>     "openSearch$startIndex": {
>       "$t": "1"
>     },
>     "entry": [
>       {
>         "id": {
>           "$t": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cokwr"
>         },
>         "updated": {
>           "$t": "2016-04-03T19:02:55.541Z"
>         },
>         "category": [
>           {
>             "scheme": "http:\/\/schemas.google.com\/spreadsheets\/2006",
>             "term": "http:\/\/schemas.google.com\/spreadsheets\/2006#list"
>           }
>         ],
>         "title": {
>           "type": "text",
>           "$t": "Once I"
>         },
>         "content": {
>           "type": "text",
>           "$t": "two: caught a , three: fish, four: alive"
>         },
>         "link": [
>           {
>             "rel": "self",
>             "type": "application\/atom+xml",
>             "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cokwr"
>           }
>         ]
>       },
>       {
>         "id": {
>           "$t": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cpzh4"
>         },
>         "updated": {
>           "$t": "2016-04-03T19:02:55.541Z"
>         },
>         "category": [
>           {
>             "scheme": "http:\/\/schemas.google.com\/spreadsheets\/2006",
>             "term": "http:\/\/schemas.google.com\/spreadsheets\/2006#list"
>           }
>         ],
>         "title": {
>           "type": "text",
>           "$t": "Six"
>         },
>         "content": {
>           "type": "text",
>           "$t": "two: Seven, three: Eight, four: Nine, five: Ten"
>         },
>         "link": [
>           {
>             "rel": "self",
>             "type": "application\/atom+xml",
>             "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cpzh4"
>           }
>         ]
>       },
>       {
>         "id": {
>           "$t": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cre1l"
>         },
>         "updated": {
>           "$t": "2016-04-03T19:02:55.541Z"
>         },
>         "category": [
>           {
>             "scheme": "http:\/\/schemas.google.com\/spreadsheets\/2006",
>             "term": "http:\/\/schemas.google.com\/spreadsheets\/2006#list"
>           }
>         ],
>         "title": {
>           "type": "text",
>           "$t": "Then I"
>         },
>         "content": {
>           "type": "text",
>           "$t": "two: put it, three: in, four: again"
>         },
>         "link": [
>           {
>             "rel": "self",
>             "type": "application\/atom+xml",
>             "href": "https:\/\/spreadsheets.google.com\/feeds\/list\/1Uxon1ZBYVxTVlw5AhEFY8b6SMlrG_wAHt1mf1pXtEd4\/od6\/public\/basic\/cre1l"
>           }
>         ]
>       }
>     ]
>   }
> }
>
>
>
> What I was thinking of for this is perhaps synchronising a set of tiddlers
> of type *JSON data* from my Google Sheets
> <http://bit.do/TiddlyCRM-sampledata>. Many of my worksheets in Google are
> already formatted in such a way that they will generate TiddlyWiki
> tiddlers, with the relevant column headings expected by TiddlyWiki and one
> row per tiddler. I currently export these manually to CSV, then convert
> them to JSON; this new idea would save those extra steps. Once I have them
> as *JSON data* tiddlers full of tiddler definitions, I should then be able
> to bulk-import the tiddlers into my wiki space from the *JSON data*
> tiddlers fairly easily.
>
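[Editor's sketch, not part of the original email: building on Jeremy's observation about the merged cells, the step from fetched feed to an importable list might look roughly like this. Assumptions: the first column holds the tiddler title, every other column name becomes a field name verbatim, and no cell value contains a comma followed by something that looks like a column name.]

```javascript
// Turn the parsed feed object into an array of tiddler-shaped objects,
// ready to be serialised with JSON.stringify into a "JSON data" tiddler
// (or handed straight to a bulk-import routine).
function feedToTiddlers(data) {
  return data.feed.entry.map(function (entry) {
    var tiddler = { title: entry.title.$t }; // first column assumed to be the title
    entry.content.$t.split(/,\s*(?=\w+:)/).forEach(function (pair) {
      var i = pair.indexOf(":");
      if (i !== -1) {
        tiddler[pair.slice(0, i).trim()] = pair.slice(i + 1).trim();
      }
    });
    return tiddler;
  });
}
```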
> This solution would initially replace my cumbersome procedure for working
> with the SampleData which I'm generating for testing the TiddlyCRM
> project; this sample data gets imported and purged often. It may also be
> useful later for live data, if an end user wants to bulk-import their
> legacy information into TiddlyCRM.
>
> Any advice on how to implement this would be greatly appreciated.
>
> Kind regards,
>
> Hegart.
>
--
You received this message because you are subscribed to the Google Groups
"TiddlyWiki" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/tiddlywiki.
To view this discussion on the web visit
https://groups.google.com/d/msgid/tiddlywiki/bc2d2f9a-c2b8-49cf-a58a-b7a53781036c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.