I've toyed around with just using txt files, but my limited
understanding of "proper technique" in dealing with them makes them
just as cumbersome...

I'm very familiar with normalization, and if it were practical (and the
cost didn't outweigh the benefit) I'd make sure everything was
absolutely 6NF and then some... but coulda, woulda, shoulda... it's not
practical. The best I'm shooting for is 3NF or 4NF, and even that's not
a stringent requirement...

I guess you could say I know my way around databases; I'm just lost
trying to implement this in a Ruby way. My database breakdown will
probably look as follows (I think, unless someone can point me in a
better direction)...

Over time there may be 5000 sheets... each sheet may have up to 20
columns, each column will eventually belong to exactly one group, and
each group may have up to 400 "rows". So if a sheet has 4 columns and
2 groups, like my previous example, and is filled to capacity, there
will be 400 rows for each group... 800 rows in all... which then need
to be translated into one cohesive unit for display. The final display
will have all 4 columns separated into groups and "merged" so all the
tool_numbers line up in rows, displaying only 400 rows.
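
To make the target concrete, the merged result I'm picturing is one
entry per tool_number, something like this (all the names here are
invented for illustration):

merged = {
  'T-100' => {
    'Group A' => { 'Col 1' => 10, 'Col 2' => 12 },
    'Group B' => { 'Col 3' => 7,  'Col 4' => 9  },
  },
  # ... up to 400 tool_numbers per sheet
}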

I **think** I understand the database side... I'm lost on the Ruby
implementation (or any implementation). Is there a "most effective"
way to construct my relationships? Here are my tables, with my best
guess at the models after them:

Sheets
- id (int)
- name (string)

Columns
- id (int)
- sheet_id (int)
- column_group_id (int)
- name (string)

ColumnGroups
- id (int)
- name (string)

Data
- id (int)
- sheet_id (int)
- column_id (int)
- tool_number (string)
- value (int)
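
My guess at the ActiveRecord side (untested; the class names are just
what Rails would infer from those tables - in particular, a "data"
table maps to a Datum model):

class Sheet < ActiveRecord::Base
  has_many :columns
  has_many :data, :class_name => 'Datum'
end

class ColumnGroup < ActiveRecord::Base
  has_many :columns
end

class Column < ActiveRecord::Base
  belongs_to :sheet
  belongs_to :column_group
  has_many :data, :class_name => 'Datum'
end

# backed by the "data" table (Rails treats datum/data as singular/plural)
class Datum < ActiveRecord::Base
  belongs_to :sheet
  belongs_to :column
end

(I realize data.sheet_id is redundant, since each column already
belongs to a sheet; I'm keeping it so a per-sheet query doesn't need
the extra join.)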

Then, for one sheet, I'll be pulling a result set with a query like:

select tool_number, value from data where sheet_id = x

What's an effective way to iterate over the returned dataset and sort
it out into its corresponding columns, column groups, and rows? I'm
seeing a join in my head, but I don't know on what.
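
The best I can come up with, assuming the models sketched above, is to
eager-load each datum's column and group in one query and then fold
the flat result into a nested hash, one entry per tool_number:

data = Datum.find(:all,
  :conditions => ['data.sheet_id = ?', sheet_id],
  :include    => { :column => :column_group })

rows = {}  # tool_number => { group name => { column name => value } }
data.each do |d|
  row   = rows[d.tool_number] ||= {}
  group = row[d.column.column_group.name] ||= {}
  group[d.column.name] = d.value
end

That should collapse the 800 data rows from my earlier example into
400 entries in rows, but I have no idea whether it's idiomatic or
efficient.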

:(

Hopefully my problem is becoming a little clearer... but the deeper I
dig, the more I suspect there's an elegant solution I'm not advanced
enough to see.

On Feb 10, 8:59 am, Randy Kramer <[email protected]> wrote:
> On Tuesday 10 February 2009 08:31 am, [email protected] wrote:
>
> > What I'm really looking for is a technical explanation of the
> > correct/incorrect way to achieve this... I'm sure it's a problem that
> > someone, somewhere had to solve once before and I've been trying to
> > reinvent it, as I said - with separate tables for the columns, column
> > groups, rows, tables... but in the end - merging all the tables
> > together and iterating over everything just seems to take forever...
> > not in the least bit efficient or reliable.
>
> Well, I sort of stand by my original response then.  I mean, when you
> have an unnormalized relational database and responses are too slow,
> the typical recommendation (I think) is to normalize the database.  I
> won't try to explain that here; you need to look it up.  (Maybe someone
> can explain it (and how to do it) simply, but I can't, at least not at
> this time.)
>
> Normalizing the database is not the only way forward, however, and I'd
> ask how much data you have.  For my only (in-progress) application, a
> relational database is just not the right fit, and in general slows
> everything down (in comparison to plain text files and "ordinary" (and
> indexed) searches).  (My application has plain text files with
> (currently) up to 5000 variable-length records per file, totalling on
> the order of 10,000,000 characters per file.  At the moment I have
> about 12 such files, although only two are that big.  I plan to scale
> to files as big as 100,000,000 characters without switching to a
> relational database, which I'm sure would slow down my application.)
>
> How much data will you have in this table (or these tables)?  For a low
> quantity of data, maybe even a spreadsheet "technology" would do the job?
>
> Other solutions, like a separate machine (server) to handle the database
> could help as well.  I guess someone would need more information,
> specifically about the quantity of data involved (now and in the
> future).
>
> Randy Kramer
> --
> I didn't have time to write a short letter, so I created a video
> instead.  --with apologies to Cicero, et al.