Now wait a second... you have to change the user interface that's calling that stored procedure, so there are TWO spots right there that need changing: the UI call and the actual SP itself. Or how am I reading that wrong?

On 2016-03-10 13:53, Stephen Russell wrote:
Change is great because that is why we have a job.

If you have to fix select statements in your system, I would only want to do it on the db and then adjust the receivers as needed. It is simple to do it there, or at least that is how I have been doing this for the last 18+ years. For any maintenance question you only go to one SINGLE point of failure.

YMMV

On Thu, Mar 10, 2016 at 12:10 PM, <
[email protected]> wrote:

I realize it's done lots of places, but I never wanted explicit stored
procedures for inserts/updates, as they require an update every time you
change a structure.  That's too fragile/rigid a system for my liking.

I'm thinking it'll be a stored procedure for the purpose of inserting
something into a table and grabbing the @@IDENTITY value resulting from
the insert. I realize the numbers will grow large: for Table1's insert
I get a value of 1, then for Table2's insert I get the next value (2),
etc. etc. etc. I don't mind that my entire collection of PKeys is unique
numbers.  I don't see this system ever hitting the maximum integer value.
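A minimal sketch of that kind of sproc (the Keys table and its columns here are invented for illustration). One caution worth noting: SCOPE_IDENTITY() is generally preferred over @@IDENTITY, because @@IDENTITY can return an identity value generated by a trigger on a different table.

```sql
-- Hypothetical Keys table whose identity column dispenses the keys.
CREATE TABLE Keys (
    iKey     INT IDENTITY(1,1) PRIMARY KEY,
    tCreated DATETIME NOT NULL DEFAULT GETDATE()
);
GO

CREATE PROCEDURE GetNextKey
    @NewKey INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Keys (tCreated) VALUES (GETDATE());
    -- SCOPE_IDENTITY() ignores identities generated by triggers
    -- firing off this insert, unlike @@IDENTITY.
    SET @NewKey = SCOPE_IDENTITY();
END
```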

Thus it's similar to the classic Fox GetNextKey routine, but instead of a
row for each table, the Keys table just hands out the next integer key
created... and if it's not used (i.e., the user hits Cancel and doesn't
save his new data), no big deal.
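If the identity-table approach above feels heavy, the same single-counter idea can be done without an ever-growing Keys table. This is a sketch only, with invented names; the UPDATE ... OUTPUT form hands out the key and advances the counter in one atomic statement:

```sql
-- One global counter row, instead of the Fox-style row-per-table.
CREATE TABLE NextKey (iNext INT NOT NULL);
INSERT INTO NextKey (iNext) VALUES (1);
GO

CREATE PROCEDURE GetNextKey
    @Key INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @t TABLE (iKey INT);
    -- Atomically return the current value and bump the counter, so two
    -- concurrent callers can never receive the same key.
    UPDATE NextKey
        SET iNext = iNext + 1
        OUTPUT deleted.iNext INTO @t;
    SELECT @Key = iKey FROM @t;
END
```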

Make sense?




On 2016-03-10 12:42, Stephen Russell wrote:

I understand how this could be a complex job, and that the first insert may
contain only 30% of the total rows known at this time.

I would consider making sprocs for inserts into each unique table that
return, when necessary, the PKey of that insert.

jobInsert
itemInsert
detailsInsert
offshootsInsert

Also make:
jobSelect
itemSelect
detailsSelect
offshootsSelect
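One of the per-table insert sprocs might look like this sketch (the Jobs columns shown are assumptions, not taken from the thread), returning the new PKey through an OUTPUT parameter:

```sql
CREATE PROCEDURE jobInsert
    @cName    VARCHAR(100),
    @dStarted DATETIME,
    @cID      INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Jobs (cName, dStarted)
        VALUES (@cName, @dStarted);
    -- Hand the generated PKey back to the caller.
    SET @cID = SCOPE_IDENTITY();
END
```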


Some of my databases have 400-500 sprocs.

jobAllAspects could have all of the joins needed to pull the entire beast
into one dataset, or return all of the tables as independent datasets. We
do a lot of the latter here at Ring.
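The "independent datasets" variant can be sketched as a sproc emitting one result set per table; the schemas here follow the cID/cJobID/cItemID convention used later in this thread but are otherwise assumed:

```sql
CREATE PROCEDURE jobAllAspects
    @JobID INT
AS
BEGIN
    SET NOCOUNT ON;
    -- Each SELECT comes back to the caller as a separate result set.
    SELECT * FROM Jobs    WHERE cID    = @JobID;
    SELECT * FROM Items   WHERE cJobID = @JobID;
    SELECT * FROM Details WHERE cItemID IN
        (SELECT cID FROM Items WHERE cJobID = @JobID);
END
```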



On Thu, Mar 10, 2016 at 11:26 AM, <
[email protected]> wrote:

On 2016-03-10 10:55, Stephen Russell wrote:

"until I was absolutely sure I wanted to save the entire dataset."

That is exactly what we are talking about.  When the user clicks the Save,
Submit, or OK button, they are in save mode.  Then you commit the header
row(s), retaining the fkey(s) necessary for your transactional details.



Yes, but until the user does the Save, I have to keep the relationship
hierarchy for primary keys and related foreign keys.

Example (where cID is the table's primary key):

1) Create Job (cID in Jobs cursor)
2) Create 1:M items (cID in Items cursor, with cJobID foreign key pointing back to Jobs table)
3) Create 1:M details about each item (cID in Details cursor, with cItemID foreign key pointing back to Items table)
4) Create some 1:M offshoots perhaps for each Detail (...you see the trend...)

Rather than add all those records to the database immediately and later
abandon them because the dude hits "Cancel", I prefer to create my own keys
rather than rely on AutoIncrement, so I have full control like this.
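For comparison, the server-side alternative is to defer all the inserts to Save time and commit the whole hierarchy in one transaction, capturing each generated key for its children. A sketch only, with column names following the cID/cJobID/cItemID convention above but otherwise invented:

```sql
BEGIN TRANSACTION;

DECLARE @JobID INT, @ItemID INT;

INSERT INTO Jobs (cName) VALUES ('New job');
SET @JobID = SCOPE_IDENTITY();

INSERT INTO Items (cJobID, cDesc) VALUES (@JobID, 'First item');
SET @ItemID = SCOPE_IDENTITY();

INSERT INTO Details (cItemID, cNote) VALUES (@ItemID, 'A detail');

-- Had the user hit Cancel instead, nothing would ever have been written;
-- a ROLLBACK TRANSACTION here would discard it all.
COMMIT TRANSACTION;
```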


[excessive quoting removed by server]

_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://mail.leafe.com/mailman/listinfo/profox
OT-free version of this list: http://mail.leafe.com/mailman/listinfo/profoxtech
Searchable Archive: http://leafe.com/archives/search/profox
This message: 
http://leafe.com/archives/byMID/profox/[email protected]
** All postings, unless explicitly stated otherwise, are the opinions of the 
author, and do not constitute legal or medical advice. This statement is added 
to the messages for those lawyers who are too stupid to see the obvious.
