I have been giving this some thought, but I need to know the exact structure 
of the JSON output file you would be importing into TiddlyWiki. Is it 
similar to the below?

[
fieldNames,
row1,
row2,
row3
]
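As a minimal sketch of what I mean (Python, with invented field names and 
values, assuming row 0 holds the field names and each later row is a list 
of values):

```python
import json

# Hypothetical export (field names invented): first row = field names,
# remaining rows = values, matching the shape sketched above.
raw = ('[["qID", "session", "excerpt"],'
       ' ["q1", "s01", "First excerpt"],'
       ' ["q2", "s02", "Second excerpt"]]')

rows = json.loads(raw)
field_names, data_rows = rows[0], rows[1:]

# Each data row becomes one tiddler: a flat mapping of field name -> value.
tiddlers = [dict(zip(field_names, row)) for row in data_rows]
print(tiddlers[0]["session"])  # -> s01
```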

If so, we can take the tiddler field names from row 0, and then split each 
row into its own tiddler. From there the native Filter, List, etc. widgets 
will build your UI.

The question here is whether each row is one string (wrapped in its own 
quotes) or a number of strings (each wrapped in quotes, at which point this 
is no longer a "flat", one-level-deep JSON). Or is it coming out of the DB 
in some other format?
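To make that distinction concrete, here is a quick check in Python, on 
invented sample data, for which of the two shapes you have:

```python
import json

# Two possible shapes for the same table. In the first, each row is one
# string; in the second, each row is itself a list of strings (nested).
flat_rows = json.loads('["qID,source,page", "q1,Smith,12"]')
nested_rows = json.loads('[["qID", "source", "page"], ["q1", "Smith", "12"]]')

def is_flat(rows):
    # True when every row is a single string, i.e. the JSON is one level deep.
    return all(isinstance(row, str) for row in rows)

print(is_flat(flat_rows), is_flat(nested_rows))  # True False
```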

The other problem, mentioned upthread, is embedded quotation marks (etc.) 
in the data. The best approach would be to export every DB field where this 
occurs as "HTML encoded text". Then you can use the `decodehtml` 
filter operator on each of those text blocks before displaying it. See: 
https://tiddlywiki.com/#decodehtml%20Operator
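As a sketch of what "HTML encoded text" means here (Python's standard 
library shown, with an invented excerpt; the encoding step on the DB side 
will depend on your export tool):

```python
import html

# An invented excerpt with embedded quotes and an ampersand.
excerpt = 'She wrote "see p. 12" & moved on.'

# Encode on export from the DB...
encoded = html.escape(excerpt)
print(encoded)  # She wrote &quot;see p. 12&quot; &amp; moved on.

# ...and TiddlyWiki's decodehtml operator reverses it on display.
# html.unescape does the equivalent here, as a round-trip check.
assert html.unescape(encoded) == excerpt
```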

Best,
Joshua F

On Tuesday, January 7, 2020 at 1:29:41 PM UTC-8, springer wrote:
>
> Joshua, yes trying the new transclusion at your site yields a string 
> ("Eqa2nAAhHN0") -- is that what videoId is supposed to look like?
>
> For the main task I'm interested in, there's no structure beyond a flat 
> table with fields (almost uniformly populated) as follows:
>
> qID:
> source:
> page:
> pageMod:
> excerpt:
> tagline:
> session:
>
> The way the data gets used in my TW5 instance is that I'd filter by 
> "session" (24 sessions) and get all the associated excerpts displayed 
> inside sliders (or using details widget or some such) so that one line with 
> key info, per excerpt, is always visible (as "summary"), but the bulk of 
> each paragraph-worth of text is hidden until needed.
>
> The original database's excerpt field does have all sorts of troublesome 
> stuff (quotes, brackets, occasional paragraph breaks), and the source field 
> may have quote marks, colons, etc. as well. Even if I deploy a substitution 
> trick for the paragraph breaks, it's definitely too complex to work easily 
> as CSV, given the gazillions of commas and quote marks. 
>
> Given this complexity, does it sound like I should just stick to the low 
> road and do the dressing-up within my database prior to a copy-paste 
> process? (It's only 24 copy-paste operations to get the sets I need. I'd 
> lose the flexibility to adjust details on the fly, but that's of marginal 
> value if converting into JSON, plus reconstructing punctuation etc as TW5 
> peeks into the JSON, turns out to be an ordeal.) 
>
> -Springer
>
> On Tuesday, January 7, 2020 at 1:53:08 PM UTC-5, Joshua Fontany wrote:
>>
>> Yes, currently that is a non-trivial task, as there really is no 
>> "standard JSON format", so all importing so far from database structures has 
>> been ad-hoc work (I believe Evan Balster has a good CSV importing workflow).
>>
>> Also, I apologize for my example wiki-code. It was late and I messed up 
>> both the path _and_ my separator character (should have been 'slash' not 
>> 'backslash'). Sheesh, good one Josh (lol).
>>
>> The working transclusion would be: 
>> {{Test/YouTubeAPI.json##/items/1/id/videoId}}
>>
>> Moving on to your query, would your output data have a standard set of 
>> fields, and what kind of json structure would there be? Once we have that 
>> nailed down a bit, I can suggest various Filter tricks. 
>>
>> Best,
>>
>> Joshua Fontany
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"TiddlyWiki" group.