Hi Nick!
Allow me to provide you with some insights about working with a SQL DBMS as 
the persistence layer for an app.

   - *json.Marshal/Unmarshal work like a charm* with columns of type JSON. 
   As you state, once you have a table with a column of that type, you can 
   store almost anything in it, so whatever data type you use in your Go 
   app will "fit" into that column with the help of the magical 
   json.Marshal, and you "parse" it back using json.Unmarshal (see the 
   sketch just below this list).
      - Pros: no need to update the table schema if the type of the value 
      in your application changes.
      - Cons: not a good idea if you are planning to look up a row by a 
      value stored inside that JSON.
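   To illustrate, here is a minimal sketch of that idea. It is only an 
   example: the preferences table (id serial, data json), the db handle, 
   and the Preferences struct are all made up, and I assume "database/sql" 
   and "encoding/json" are imported with a PostgreSQL driver loaded.
   
    type Preferences struct {
        Theme    string `json:"theme"`
        PageSize int    `json:"pageSize"`
    }
    
    // SavePreferences marshals the struct and stores the bytes in a JSON column.
    func SavePreferences(db *sql.DB, p Preferences) error {
        data, err := json.Marshal(p) // whatever the struct's shape, it "fits"
        if err != nil {
            return err
        }
        _, err = db.Exec(`INSERT INTO preferences (data) VALUES ($1)`, data)
        return err
    }
    
    // LoadPreferences reads the JSON bytes back and unmarshals them into the struct.
    func LoadPreferences(db *sql.DB, id int) (Preferences, error) {
        var p Preferences
        var data []byte
        err := db.QueryRow(`SELECT data FROM preferences WHERE id = $1`, id).Scan(&data)
        if err != nil {
            return p, err
        }
        err = json.Unmarshal(data, &p)
        return p, err
    }
   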
   - *Focus on modeling first, and on technology later*. I used to work 
   with Hibernate in Java. I really appreciate the efforts of the community 
   to make life easier regarding object-relational mapping, but in the end, 
   when you need to build your aggregations (those complex objects that are 
   made of more granular ones), plain SQL proves to be more straightforward 
   and simple for me. As Eric says, sometimes the "relationships are much 
   more important than objects", and in those cases the ORM may not provide 
   a straightforward solution. The real thing in object-relational mapping 
   is not only the ORM behind it; it is also, and more importantly (to me), 
   how you model the problem's domain entities. I mean, modelling your 
   database schema is like laying the foundation blocks of a skyscraper: 
   from there you build on top of it. Thus, if you fail to properly 
   understand a domain's problem in its very granular terms (the true 
   nature of things), you will probably end up with a lot of additional 
   code regardless of the abstractions you use to communicate with the db. 
   - *If you choose to work with plain SQL,* I have some good, proven 
   recipes that work for me, which I will gladly share with you.
   - *(A)* I find that scanning a row, regardless of the query it may come 
   from, has "a common behaviour", which I encapsulated in some convenient 
   abstractions. See the code below.
   
    // RowScanFunc scans a single row from a given query.
    type RowScanFunc func(rows *sql.Rows) (interface{}, error)
    
    // ScanMultiples scans multiple rows using a scanner function.
    func ScanMultiples(rows *sql.Rows, rowScanFunc RowScanFunc) ([]interface{}, error) {
        var records []interface{}
        for rows.Next() {
            record, err := rowScanFunc(rows)
            if err != nil {
                return nil, err
            }
            records = append(records, record)
        }
        if err := rows.Err(); err != nil {
            return nil, err
        }
        return records, nil
    }
   
    So, for example, you could use it like this:
    
    -- sql persistence layer
    CREATE TABLE bank_account
    (
      id      serial NOT NULL,
      type    text   NOT NULL,
      balance int    NOT NULL,
      details json   NOT NULL,
      CONSTRAINT bank_account_pkey PRIMARY KEY (id)
    )
   
    // golang app
    type BankAccount struct {
        Id      int                `json:"id"`
        Type    string             `json:"type"`
        Balance int                `json:"balance"`            // btw: I don't use floats for currency values, they cannot be trusted.
        Details BankAccountDetails `json:"bankAccountDetails"` // assume BankAccountDetails is another struct declared elsewhere
    }
   
    // GetAllBankAccounts gets all the accounts that correspond to the given list of ids.
    func GetAllBankAccounts(accountsIds ...int) ([]interface{}, error) {
        rows, err := Dba.Query(`
            SELECT c.id, c.type, c.balance, c.details
            FROM bank_account c
            WHERE c.id = ANY($1)`, pq.Array(accountsIds))
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        return ScanMultiples(rows, scanBankAccount)
    }
   
    // scanBankAccount scans a single row, interpreting it as a "BankAccount" object.
    func scanBankAccount(rows *sql.Rows) (interface{}, error) {
        var ba BankAccount
        var details []byte
        err := rows.Scan(&ba.Id, &ba.Type, &ba.Balance, &details)
        if err != nil {
            return nil, err
        }
        err = json.Unmarshal(details, &ba.Details) // <-- the magic we talked about! nice!
        if err != nil {
            logger.Errorf("error unmarshalling bank account details (='%s'), error was: '%v'", string(details), err)
        }
        return &ba, nil
    }
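    
    Since ScanMultiples returns a []interface{}, the caller asserts each 
    element back to the concrete type. For example (the ids 1, 2 and 3 are 
    made up):
    
    accounts, err := GetAllBankAccounts(1, 2, 3)
    if err != nil {
        return err
    }
    for _, a := range accounts {
        ba := a.(*BankAccount) // scanBankAccount returns *BankAccount values
        fmt.Printf("account %d has balance %d\n", ba.Id, ba.Balance)
    }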
   - *For dealing with transactions* I find this snippet very, very useful 
   (taken and adapted from https://stackoverflow.com/a/23502629/903998):
   
    type TransactionFunc = func(*sql.Tx) (interface{}, error)
    
    // ExecuteTransactionFunc runs txFunc inside a transaction, committing
    // on success and rolling back on error or panic.
    func ExecuteTransactionFunc(db *sql.DB, txFunc TransactionFunc) (result interface{}, err error) {
        tx, err := db.Begin()
        if err != nil {
            return
        }
        defer func() {
            if p := recover(); p != nil {
                tx.Rollback()
                panic(p) // re-throw panic after Rollback
            } else if err != nil {
                tx.Rollback() // err is non-nil; don't change it
            } else {
                err = tx.Commit() // err is nil; if Commit returns an error, update err
            }
        }()
        result, err = txFunc(tx)
        return
    } 
   
    And you could employ that code in scenarios like the one below: 
    
    func TransferMoney(fromBankAccountId int, toBankAccountId int, amount int) (interface{}, error) {
        anonymousTransactionFunc := func(tx *sql.Tx) (interface{}, error) { // a.k.a. "closure"
            return doTransferMoney(tx, fromBankAccountId, toBankAccountId, amount)
        }
        return ExecuteTransactionFunc(Db, anonymousTransactionFunc) // assuming Db holds an actual connection to the underlying sql DB
    }
    
    func doTransferMoney(tx *sql.Tx, fromBankAccountId int, toBankAccountId int, amount int) (result interface{}, err error) {
        err = withdrawMoneyFrom(tx, fromBankAccountId, amount)
        if err != nil {
            return
        }
        err = putMoneyInto(tx, toBankAccountId, amount)
        if err != nil {
            return
        }
        result = struct{}{} // here you build your answer according to the system's well-defined business rules
        return
    }
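    
    withdrawMoneyFrom and putMoneyInto are left for you to write. Just as a 
    sketch of what the first one might look like (the "no overdraft" rule 
    in the WHERE clause is only my assumption):
    
    // Hypothetical helper: debits an account within the given transaction.
    // Returning an error makes ExecuteTransactionFunc roll everything back.
    func withdrawMoneyFrom(tx *sql.Tx, accountId int, amount int) error {
        res, err := tx.Exec(`
            UPDATE bank_account
            SET balance = balance - $1
            WHERE id = $2 AND balance >= $1`, amount, accountId)
        if err != nil {
            return err
        }
        n, err := res.RowsAffected()
        if err != nil {
            return err
        }
        if n == 0 {
            return fmt.Errorf("account %d not found or has insufficient funds", accountId)
        }
        return nil
    }
    
    putMoneyInto would be the symmetric UPDATE, adding the amount instead 
    of subtracting it.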
   

Well, I guess that is a LOT for a start. Forgive me if I overwhelm you a 
little bit with such a large response, but you asked for it! I just tried 
to share with you what I have learned and what has worked for me.
Hope it really helps, and, of course, I will be eager to read any feedback 
regarding good practices when working with SQL.
Greetings,
*V* 

P.S.: Please forgive me if the code has errors; it was not tested and is 
only meant to describe the examples and improve overall comprehension.

On Thursday, December 20, 2018 at 16:19:50 (UTC-3), er...@tibco.com 
wrote:
>
>
>
> On Thursday, December 20, 2018 at 7:23:22 AM UTC-8, Nick wrote:
>>
>> Hi all, 
>>
>> I'm newish to Go, and I've been really enjoying using it. I'm still 
>> getting to grips with the best practices of the language. 
>>
>> Up until now, I've used MarshalJSON methods on complex struct types 
>> to easily save the state of data, using Marshal() from 
>> encoding/json.  This has worked really well, and I have started to 
>> get the simple power of interfaces with how easy it was to do. 
>>
>> I've got to the stage that it makes more sense to use SQL than JSON 
>> for my data now, and I was thinking it would make most sense to have 
>> a similar process to read and write SQL, using methods on my struct 
>> data types, which can then be called by functions that take an 
>> interface implementing those methods. 
>>
>
> What you're describing roughly fits into the category of an 
> "object-relational mapping". Go has a number of solutions for that. One 
> that pops to mind is GORM.
>  
>
>>
>> I'm wondering if anybody else has done anything similar (I presume 
>> so!), and could point me to examples of ways people have done this.   
>> Any other thoughts on the best ways to organise my code to do this 
>> cleanly and reusably would be very welcome indeed - as I say, I'm 
>> new enough to Go that maybe I'm still thinking about this all wrong! 
>>
>
> There are two directions I've seen recommended in the Go community. Either 
> use ORM, or sidestep it completely, and just implement the SQL directly.
>
> I've found, for the places where I need to use SQL, that relationships are 
> much more important than objects. So efficient interaction with the 
> database ends up working best by simply directly writing SQL code, 
> querying, and then scanning the results. ORMs would get in the way, because 
> they would then require multiple ORM-level interactions to answer a 
> specific query. There's a little bit of repetitiveness in the non-ORM 
> approach, but 
> typically by the time I'm done, not so much that I'm concerned.
>
> Unlike with Java, where introducing a new class of object requires a new 
> source file, with the tailored query approach I can create a structure that 
> is directly fit for purpose, and it is just a few lines of additional code 
> in an existing file. This means my database layer can easily get exactly 
> the columns I need, from exactly the tables I need, and map that into 
> exactly the structure that I need.
>
> A package like sqlx <https://github.com/jmoiron/sqlx> might simplify 
> your work even further.
>
> Eric.
>  
>
>>
>> Thanks in advance, 
>>
>> Nick 
>>
>
