Hi, Simon.

In short, yes...you can get jq to convert both ways.

It's not exactly as simple as just piping it through jq, though, just to
reiterate my earlier self-correction. JSON is intended to allow rich data
definition, so there's no quick fix that would suit all parties; in my
own case, I had to drill down to one object that had at least four
alternate formats in the same file. It requires some factory-style
handling to abstract and manage whatever deduction anyone might need
individually. The same concern would apply to any embedded .mode handling
to produce JSON through SQLite; nothing would ever be lightweight AND
comprehensive for all uses.
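
For what it's worth, the SQLite side can at least hand jq something with
recoverable keys. A minimal sketch (the db and table names here are just
placeholders):

   #  -header makes the first row the column names...which is what any
   #  downstream jq filter would key off of to rebuild JSON objects.
   sqlite3 -csv -header my.db 'SELECT id, name FROM users;'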

I've created a local function that forces JSON down to @csv format
through jq. It does what I need (i.e., crushing "plump" JSON output down
to a table-ready format, including BLOB fields, at a specific path
level), but it'd need some tweaking to handle depth and even separator
preferences per individual use case.
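
At its smallest, that crush looks something like the following (the
.d.results path matches my own files, and sample.json is a placeholder;
it also leans on jq preserving key order):

   #  Flatten one array-of-objects level into CSV rows, forcing values
   #  to strings so BLOB-ish fields don't trip up @csv.
   jq -rc '.d.results[] | [ .[] |tostring ] | @csv' sample.json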

Assuming some abstraction planning is done, the same can be done in either
direction.

I'd posted a link to the jq developers' own cookbook earlier, but the
following link actually uses a similar approach, where you parse the
header row to produce template-based JSON output:

   http://infiniteundo.com/post/99336704013/convert-csv-to-json-with-jq
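
In the same spirit, a rough sketch of that header-driven direction
(input.csv is a placeholder, and the naive split(",") will break on
quoted commas):

   #  Read the file raw and slurped, peel off the header row as keys,
   #  then zip each remaining row into an object.
   jq -R -s '
      split("\n") | map(select(length > 0))
      | (.[0] | split(",")) as $hdrs
      | [ .[1:][] | split(",") as $row
          | [ range($hdrs|length) | { ($hdrs[.]): $row[.] } ] | add ]
   ' input.csv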

Sample function for using jq to make @csv, to get JSON into SQLite:


   ###########################
   #  function jq_csv ...
   #     -  This function does a quick fix of .json files for import use...
   #
function jq_csv () {

#set -x

   #  A little help here...
   if [ $# -eq 0 ] \
         || [ "${1}" == "-h" ]; then
         #  Usage: script_name [-h|-v] filename
         #
      cat <<-ENDOFHELP

    jq_csv  requires 1-2 args:

         jq_csv  [${1:-" -h | -v "}] [${2:-"filename"}]

      Examples of use:

         jq_csv  -h ...       [this help listing]
         jq_csv  -v filename  [turns on xtrace]
         jq_csv     filename  [cleans csv for loading]

ENDOFHELP

      return 0

   fi

   if [ "${1}" == "-v" ]; then
   #  Turn on xtrace...we'll kill it on our way out
      set -x
   fi

   #  There's a path-length 'feature' in jq, which presents as a
   #  segmentation error.
      #  Switch to the target file's directory, run from there...
      #  ...and switch back. We could use cd -, but let's be explicit...
   typeset my_path=${PWD}
   #  Pure laziness...not going w optargs just yet.
   my_file="${2:-${1}}"
   typeset not_template="Error: file is not a csv-delimited document"

   if [ -e "${my_file}" ]; then
      typeset jq_path=$(dirname "${my_file}" )
      #  Guard against undefined options...
         #  Invalid first argument...STFU
      if [ -z "$(echo "${1}" |egrep '\-v|\-h' )" ] \
         && [ "$(echo "${1}" |cut -b 1 )" == "-" ]; then
            printf "\n    Error:  %s \n" "Invalid arg1... "
            return
      #  File exists, but...it's either zero-byte or not an appropriate file...
         #  GTFO
      elif [ -z "$(head "${my_file}" |egrep '\,|\"\,|\,\"' )" ]; then
            printf "\n    Error:  %s \n" "${not_template}"
            return
      fi
      #  Hey, look...a real file!
         #  A little gymnastics to establish our column headers...
            #  Have jq generate the keys into a CSV for us, and then take
            #  only the 2nd field onward. (Since {metadata} isn't useful...)
         typeset my_hdrs=$(cd "${jq_path}"                         \
                          && jq -rc                                \
                              '.d.results[] |keys_unsorted |@csv'  \
                              "$(basename ${my_file} )"            \
                          |awk '!uniq[$0]++;' 2>&1                 \
                          |cut -d, -f2-                            \
                          && cd "${my_path}"                       \
                           )
         #  Do NOT mess with the output here, as it produces a literal
         #  for the upcoming call to jq.
         #
            #  Reformulate the headers into jq syntax to be sent back
            #  for retrieval...
         typeset to_flds=$(echo "${my_hdrs}"                       \
                          |sed 's/^"/[ ./g; s/,"/, ./g; s/"//g; s/$/ |tostring ]/g;' \
                           )
         cd "${jq_path}"
         #  First the headers...then the fields.
            #  But we'll pass them both through additional parsing to
            #  make them |-delimited.
         (echo "${my_hdrs}"                              \
            && jq -rc ".d.results[] |${to_flds} |@csv "  \
            "$(basename ${my_file} )" 2>&1 )             \
            |sed 's/","/"|"/g;'
         cd "${my_path}"

   elif [ ! -e "${my_file}" ]; then
      printf "\n    Error:  %s \n"  "No input file found..."
   fi

   #  Turn off debug/verbose mode...IF it was us that did it...
      #  ($- carries an 'x' while xtrace is active.)
   if [ "${1}" == "-v" ] \
      && [ ! -z "$(echo "$-" |grep x )" ]; then
         set +x
   fi

}
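
A rough idea of how the output lands in SQLite (sample.json, my.db and
the results table are all placeholders...and it assumes a sqlite3 shell
that honors .separator alongside .mode csv during .import, since the
fields stay quoted):

   $ jq_csv sample.json > sample.psv
   $ sqlite3 my.db
   sqlite> .mode csv
   sqlite> .separator "|"
   sqlite> .import sample.psv results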

Regards.

Brian P Curley



On Sun, Jan 21, 2018 at 11:21 AM, Simon Slavin <[email protected]> wrote:

>
>
> On 21 Jan 2018, at 3:05pm, Brian Curley <[email protected]> wrote:
>
> > pipe it
> > through jq instead.
>
> I did not know jq existed.  Thanks.  Just gave the documentation a quick
> glance.
>
> jq is not installed on my platform (macOS) whereas sqlite3 is.
>
> Does jq do conversion both ways ?
>
> Can jq deduce the column names (SQL) / keys (JSON) from the output of
> SELECT without extra work from the programmer/scripter ?
>
> Simon.