Thanks Medina! I'll give it a try
On Friday, August 11, 2017 at 1:19:50 AM UTC+2, Diego Medina wrote:
>
> Hope you had a good time on vacation.
>
> No need for custom types, especially with CSV files: all you get is the
> "string" type, but you know that some columns are actually float values,
> ints, or plain text. So we have several functions that clean, format,
> etc. different kinds of data, and we just call them with the value we
> get. For example, we have:
>
> func parseDateTimeOrZero(in string) time.Time {
>     rtd := strings.TrimSpace(in)
>     loc, _ := time.LoadLocation("America/New_York")
>
>     var t = time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC)
>     for _, format := range dateLayouts {
>         stamp, er := time.ParseInLocation(format, rtd, loc)
>         if er == nil {
>             t = stamp
>             break
>         }
>     }
>     return t
> }
>
> dateLayouts is a slice of all the date/time formats we support. We loop
> over them until we get one that works; if none do, we return the default
> value (for our app this makes sense; in other cases we could return an
> error).
>
> And while walking the CSV, we just do:
>
> if headers.versusDate != optionalErrorValue {
>     versusDate = parseDateTimeOrZero(record[headers.versusDate])
> }
>
> // optionalErrorValue defaults to a number high enough that if our
> // column matcher hasn't updated the key index, it means we skip it
> // (we then have a different rule for mandatory fields)
>
> So, instead of creating a new interface DateTimeColumn with a parse
> function, it's just a standalone function that is easy to understand and
> easy to test.
>
> We have been evolving this part of the tool for years; we have had
> different developers work on it, adding new parsing rules, editing
> existing ones, making some of them more flexible, and it hasn't been an
> issue.
> The thought process is: I have a value and I need to clean it. You check
> the list of functions to see if there is anything that looks like it
> does what you want; if not, add a new one. Versus: let's find an
> interface or type that may have the things we need, then let's
> initialize that struct and call the method on it.
>
> BTW, when I first started using Go, after working in Scala for about 5
> years, it felt really wrong to have so many top-level functions, but
> after a while you see there is nothing wrong with top-level functions.
>
> Hope this helps.
>
> Thanks
>
> Diego
>
>
> On Thu, Aug 10, 2017 at 12:17 PM, Sofiane Cherchalli
> <sofi...@gmail.com> wrote:
>
>> Hi Medina,
>>
>> Sorry, I was on vacation.
>>
>> So do you mean the way to do it is to hardcode most of the
>> functionality? No need to use custom types or interfaces, just plain
>> text parsing?
>>
>> In that case, how easy is it to evolve or refactor the code?
>>
>> Thanks
>>
>> On Wednesday, July 26, 2017 at 8:36:15 PM UTC+2, Diego Medina wrote:
>>>
>>> I think we have a similar setup to what you are trying to do; we also
>>> started with Scala, and about 3 years ago we moved it to Go (we still
>>> use Scala for other parts of our app).
>>>
>>> While working in Scala and other languages you are encouraged to
>>> abstract things as much as you can; in Go it is often better to just
>>> address the issues/requirements at hand and be clear about what you
>>> are doing.
>>> In our case we define a struct that has the expected fields and types
>>> for each column, and as we walk each row, we check that we get the
>>> expected type. Then it's a matter of cleaning/adjusting values as
>>> needed, assigning the result of each cell to a variable, and
>>> continuing with the rest of the cells on the row. Once done, we
>>> initialize our struct and save it to the database, then move to the
>>> next row and repeat.
>>>
>>> Hope it helps.
>>>
>>> On Wednesday, July 26, 2017 at 10:09:07 AM UTC-4, Sofiane Cherchalli
>>> wrote:
>>>>
>>>> The schema is statically specified. The values always arrive in a
>>>> defined order. Each value has a defined type.
>>>>
>>>> On Tuesday, July 25, 2017 at 3:01:14 AM UTC+2, rog wrote:
>>>>>
>>>>> On 24 July 2017 at 23:21, Sofiane Cherchalli <sofi...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Yes, I'm trying to stream CSV values encoded in strings. A schema
>>>>>> defines the type of each value, so I have to parse values to
>>>>>> verify they match the type. Once validation is done, I apply
>>>>>> functions to each value.
>>>>>
>>>>> Is the schema dynamically or statically specified? That is, do you
>>>>> know in advance what the schema is, or are you required to write
>>>>> general code that deals with many possible schemas?
>
> --
> Diego Medina
> Go Consultant
> di...@fmpwizard.com
> https://blog.fmpwizard.com/