Sorry for the late reply, only just seen this thread. As Stephen Price said
back in September, I have been using CodeSmith to generate my stored procs
and DAL for well over a decade. Our templates have changed as language
improvements and new product features have come along, but the underlying
principles and methodology remain the same.

We generate the boilerplate code from patterns we've used and updated over
many years, patterns which have proven themselves repeatedly on projects of
all sizes. The templates produce stored procs and the associated C# objects
that are simple, predictable, easy to follow, and fast. This code just does
simple CRUD operations, nothing more. The generator then puts all the code
inside generated .sln and .csproj files and writes some basic tests for the
work it's done, so once you've designed your schema you can have a
ready-to-go VS project in just a few minutes. For web apps, we've written
templated ASP.NET pages as well (so I guess we had our own "scaffolding"
long before the term became popular), which we constantly update as
technology evolves. We don't need to wait for EF to catch up with the latest
developments in SQL Server, for example; we just roll our own. If we get it
wrong, we can easily change it.
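
To give a flavour of the shape of that generated code, here's a hand-written
sketch of one method (the proc name, class, and columns are made up for
illustration; it's not our actual template output):

    using System.Data;
    using System.Data.SqlClient;

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerData
    {
        // One generated method per proc: open, call, map, close. Nothing clever.
        public static Customer GetById(string connectionString, int customerId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.Customer_GetById", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    if (!reader.Read()) return null;
                    return new Customer
                    {
                        CustomerId = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    };
                }
            }
        }
    }

It's boring on purpose: because every method has the same shape, anyone can
read, regenerate, or replace any of them.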

All this provides us with the skeleton of an app, allowing us to
concentrate our efforts on modifying the parts that need changing to suit
the requirements of the particular job at hand. Typically our generated
object design is a close match to the table designs, which doesn't work in
all scenarios, so changes are necessary for this and a variety of other
reasons depending on the task at hand (this is the "effort" side of the
equation that David refers to above).

I've looked at EF and a few other ORMs, and I have not found a single
solitary advantage in changing, other than to become one of the cool kids
using the latest whatzit technology. I prefer to concentrate on building
things rather than learning new technologies that offer me no benefit,
which can be a nightmare to debug, and which can be taken away at any time.
What will happen if MS does to EF what they did to Silverlight, for
example? What's going to happen to all those folks who invested heavily in
Telerik's now-discontinued ORM?

My take on all this is that you need to find a data access method that you
are comfortable with, whether home-grown or open source, and stick to it. I
know it's not as simple as that when you have an employer who demands that
you know such-and-such tech, but that's been our experience.


On 17 September 2016 at 11:52, Stephen Price <step...@lythixdesigns.com>
wrote:

> Awesome thread guys.
>
>
> Just to throw in some of my comments and views.
>
>
> For Greg Low: Partly jokingly, I thought you would have loved the use of
> EF. From a business perspective, there's no shortage of projects you can fly
> into at the last moment and save the damsel in distress. Wave your trusty
> SQL cape, collect a big paycheck and fly off to the next project needing
> saving.
>
>
> Myself, being a developer, I love being able to code LINQ-style against the
> EF database context and not have to break out the SQL. I've always told
> myself my T-SQL is a weak point, but this week I arguably discredited that
> opinion myself. I had some C# code that generated a magic number on the end
> of another number. I realised I needed to run it from another codebase which
> shares the database, so I chose to write the same function in T-SQL. It
> worked, but I doubted myself too much (i.e. will it work under all
> conditions?). I got another guy to code review it and he said it looked
> fine. I ended up simplifying it, and then discovered that the EF reverse
> code gen tool we are using (which generates our EF code from the database)
> doesn't generate EF code for SQL functions.
>
> So now I realise as I type this that we are using a tool to generate
> (some) EF code from our database so that EF can generate our database.
> Poised to disappear up my own recursive arse any moment here!
>
> My usual way of optimising EF is to use DTOs, being as explicit as possible
> when querying for what I want (quick sketch below). Not ideal, but this
> conversation thread has taken me back to when I was working with Grant Maw
> (hi Grant! He's on holidays in Hawaii right now so probably not reading
> along), where he uses CodeSmith to magically gen his boilerplate ADO.NET
> layers from his databases. He's probably still enjoying his stubborn refusal
> to use an ORM as we speak.
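>
> A quick sketch of what I mean by the DTO approach (my own example; Order,
> OrderSummaryDto and the property names are hypothetical; assumes the usual
> using System; using System.Linq;):
>
>     public class OrderSummaryDto
>     {
>         public int OrderId { get; set; }
>         public DateTime PlacedOn { get; set; }
>         public decimal Total { get; set; }
>     }
>
>     // Because the projection sits inside .Select(), EF composes it into the
>     // SQL, so only these three columns come back over the wire and no full
>     // entities are materialised or change-tracked.
>     var summaries = context.Orders
>         .Where(o => o.CustomerId == customerId)
>         .Select(o => new OrderSummaryDto
>         {
>             OrderId = o.OrderId,
>             PlacedOn = o.PlacedOn,
>             Total = o.Lines.Sum(l => l.Price * l.Quantity)
>         })
>         .ToList();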
>
> New and shiny is not always the best thing down the track.
>
> It's opened my eyes. Perhaps it's time to renew my CodeSmith and see how
> far their out-of-the-box templates have come.
>
> As David pointed out, good work requires effort. We are craftsmen. We need
> to find the balance between doing good work with our tools and having our
> tools do good work.
>
> Enjoy your weekend all!
> ------------------------------
> *From:* ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on
> behalf of Greg Low (罗格雷格博士) <g...@greglow.com>
> *Sent:* Saturday, 17 September 2016 9:03:49 AM
> *To:* ozDotNet
> *Subject:* RE: Entity Framework - the lay of the land
>
>
> Hey Dave and all,
>
>
>
> “The great” -> hardly but thanks Dave.
>
>
>
> Look, my issues with many of these ORMs are numerous. Unfortunately, I spend
> my life on the back end of trying to deal with the messes involved. The
> following are the key issues that I see:
>
>
>
> *Potentially horrid performance*
>
>
>
> I’ve been on the back end of this all the time. There are several reasons:
> the frameworks generate horrid code to start with; they are typically quite
> resistant to improvement; and they tend to encourage processing with far
> too much data movement.
>
>
>
> I regularly end up in software houses with major issues that they don’t
> know how to solve. As an example, I was at a start-up software house
> recently. They had had a team of 10 developers building an application for
> the last four years. The business owner said that if it would support 1000
> concurrent users, they would have a viable business; 5000 would make a good
> business; at 500 they might survive. They had their first serious performance
> test two weeks before they had to show the investors. It fell over with 9
> concurrent users. The management (and in this case the devs too) were
> panic-stricken.
>
>
>
> Another recent example was a software house that had to deliver an app to
> a government department. They were already 4 weeks overdue and couldn’t get
> it out of UAT. They wanted a silver bullet. That’s not the point to start
> discussing architectural decisions, yet their architecture was the issue.
>
>
>
> I was in a large financial institution in Sydney a while back. They were in
> the middle
> of removing the ORM that they’d chosen out of their app because try as they
> might, they couldn’t get anywhere near the required performance numbers.
> Why had they called me in? Because before they wrote off 8 months’ work for
> 240 developers, the management wanted another opinion.
>
>
>
> Just yesterday I was working on a background processing job that processes
> a certain type of share trade in a UK-based financial services
> organisation. On a system with 48 processors, 1.2 TB of memory, and 7 x
> 20 TB flash drive arrays costing a million UK pounds each, it ran for 48
> minutes. During that time, it issued 550 million SQL batches to be
> processed, and almost nothing else would work well on the machine at the
> same time. The replacement job that we wrote in T-SQL issued 40,000 SQL
> batches and ran in just over 2 minutes. I think I can get that to a tenth
> of that time with further work. Guess which version is likely to get used
> now?
>
>
>
> *Minimal savings yet long term pain*
>
>
>
> Many of the ORMs give you an initial boost to “getting something done”.
> But at what cost? At best, on most projects that I see, it might save 10%
> of the original development time, on the first project. But as David
> pointed out in his excellent TechEd talk with Joel (and as I’ve seen from
> so many other places), the initial development cost of a project is usually
> only around 10% of the overall development cost. So what are we talking
> about? Perhaps 1% of the whole project? Putting yourself into a long-term
> restrictive straitjacket for the sake of a 1% saving is a big,
> big call. The problem is that it’s being decided by someone who isn’t
> looking at the lifetime cost, and often 90% of that lifetime cost comes out
> of someone else’s bucket.
>
>
>
> *Getting stuck in how it works*
>
>
>
> For years, code generated by tools like LINQ to SQL was very poor. And it
> knew it was talking to SQL Server in the first place. Now imagine that
> you’re generating code and you don’t even know what the DB is. That’s where
> EF started. Very poor choices are often made in these tools. The whole
> reason that “optimize for ad hoc workloads” was added to SQL Server was to
> deal with the mess from the plan cache pollution caused by these tools. A
> simple example is that on the SqlCommand object, they called AddWithValue()
> to add parameters to the parameters collection. That’s a really bad idea.
> It provides the name of the parameter and the value, but no data type, so
> it used to try to derive the data type from the data. SQL Server would
> end up with a separate query plan for every combination of every length of
> string for every parameter. And what could the developers do to fix it?
> Nothing. Because it was baked into how the framework worked. The framework
> eventually got changed a bit to have more generic sizes but still never
> addressed the actual issues.
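>
> To make that concrete, here's my sketch of the difference (not EF’s actual
> internals; the query and names are invented):
>
>     using System.Data;
>     using System.Data.SqlClient;
>
>     static SqlCommand BuildLookup(SqlConnection conn, string customerName)
>     {
>         var cmd = new SqlCommand(
>             "SELECT CustomerId FROM dbo.Customers WHERE Name = @Name", conn);
>
>         // What the frameworks did: type and length are inferred from the
>         // value, so "Ann" binds as nvarchar(3) and "Annabel" as nvarchar(7),
>         // giving SQL Server a separate cached plan per string length.
>         // cmd.Parameters.AddWithValue("@Name", customerName);
>
>         // Declaring the type and length to match the column means every call
>         // reuses one plan, whatever the value:
>         cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100).Value = customerName;
>
>         return cmd;
>     }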
>
>
>
> *Getting stuck with old code*
>
>
>
> SQL Server 2008 added a bunch of new data types. Spatial was one of my
> favourites. When did that get added to EF? Many, many, many years later.
> What could the developers do about it? Almost nothing except very hacky
> workarounds. When you use a framework like this, you are totally at the
> mercy of what its developers feel is important, and that’s if they haven’t
> already lost interest in it. When you code with a tool that was targeted
> cross-DB, you usually end up with a very poor choice of data types. You end
> up with the lowest common denominator of everything, or you end up with
> something that doesn’t fit well with the framework.
>
>
>
> Many of the authors of the frameworks quickly lose interest. Do you want
> to be stuck with it?
>
>
>
> *Summary*
>
>
>
> Look, I could go on for hours on this stuff and on what I’ve seen. As I
> mentioned, it depends what you’re building. But if you’re taking
> architectural advice from someone who hasn’t built real systems at scale,
> at least get further opinions. “Oh I don’t want to write all that
> boilerplate code” I hear people say. Sure, I get that, but are you incapable
> of code generation? At least give yourself control over how the code works.
> And that’s mostly a one-time cost. You get to use it again, project after
> project. But at least if you don’t like it, you can fix it.
>
>
>
> I also see a lot of nonsense written about not wanting business logic in
> stored procedures. I love people who have strict rules about that. They
> keep us in business. “We don’t have people with strong T-SQL skills” I hear
> you say. When you have a room full of devs that are clones of each other,
> the lack of T-SQL skills isn’t the problem. The hiring decisions were.
> Yesterday I was in an organisation with around 500 developers. I’d say
> there are maybe 2 people who are very data focussed, yet data is at the
> core of everything they do. It’s good for me. Those devs can generate poor
> code faster than a handful of us can ever fix it.
>
>
>
> *Future*
>
>
>
> Also worth noting that if you want real performance from upcoming DB
> engines, you might want to rethink that. Even DocumentDB that others have
> mentioned has a concept of a stored procedure within the engine. You write
> them in JavaScript. But there’s a reason they are there.
>
>
>
> But in SQL Server 2016, this is even more stark. I have simple examples on
> SQL Server 2016 where a batch of code sent from an app executes in 26
> seconds. If I move the table to in-memory, it runs in 25.5 seconds. If I
> move the code to a natively-compiled stored procedure, it runs in 60
> milliseconds.
>
>
>
> If you are building toy apps (e.g. a replacement for CardFile), none of this
> matters, but I live in a world where it really, really matters. And so do
> the vast majority of software houses that I spend time in.
>
>
>
> Your call.
>
>
>
> Regards,
>
>
>
> Greg
>
>
>
> Dr Greg Low
>
>
>
> 1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
>
> SQL Down Under | Web: www.sqldownunder.com | http://greglow.me
>
>
>
> *From:* ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-bounces@ozdotnet.com]
> *On Behalf Of* David Connors
> *Sent:* Friday, 16 September 2016 1:20 PM
> *To:* ozDotNet <ozdotnet@ozdotnet.com>
> *Subject:* Re: Entity Framework - the lay of the land
>
>
>
> On Fri, 16 Sep 2016 at 11:56 Greg Keogh <gfke...@gmail.com> wrote:
>
> The people who think that ORMs are a good idea have a code-centric view of
> the world.
>
>
>
> Stored procs!
>
>
>
> I know, right? Finally, someone who shares my enthusiasm!
>
>
>
> [ ... ]
>
>
>
> Databases are unlikely to have a structure that suits coders.
>
>
>
> THIS ^^
>
>
>
> The goal of writing good quality / low TCO software is not to ensure that
> things suit a given coder's predilections along the way.
>
>
>
> What can bridge the "impedance" gap? Something has to.
>
>
>
> I agree. It is called effort.
>
>
>
> It doesn't matter how much you like stored procs, you still have to get
> stuff in and out of them across the gap to the coder's classes. How do
> procs help? Are you proposing that more business logic be moved into procs?
>
>
>
> It isn't a case of me liking stored procs; it is a case of them giving you
> a declarative data tier, both in terms of its state and also the paths of
> execution that affect it day to day. The things that change the state of
> the app become entirely deterministic, as opposed to whatever BS EF cooks
> up.
>
>
>
> If so, that way lies madness, as you can't easily integrate proc code into
> source control, testing, versioning, builds, etc.
>
>
>
> File -> New -> Project -> SQL Server Database Project. Prepare to be
> amazeballsed.
>
>
>
> I've seen whole apps written in T-SQL, and it's quite frightening.
>
>
>
> I'm not advocating writing whole apps in T-SQL. Stored procs need to be
> short and punchy unless you want to spend your life diagnosing deadlocks.
>
>
>
> David.
>
>
>
>
>
> --
>
> David Connors
> da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363
>
