Fw: Re: [OT] NBN and phone

2017-09-17 Thread

$19 at Bunnings. Does CAT5/6 and Phone. Worth it.

Regards,
 
Greg
 
Dr Greg Low
 
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me
On 18/09/2017 8:13:13 AM, Greg Keogh  wrote:
Probably crap cable or connector. Here I grabbed one pair out of the 4-pair 
cable I run between floors, and then spliced that back into the old telephone 
line. Have you got a cable tester?

I don't have a cable tester. I should have tested the cable with the phone on 
the bench, before I spent 20 minutes crawling under the house. But then again, 
who would expect a cable right out of the packet not to work? I think I'll 
waste more time extracting the cable, putting it back in the original packet 
(which is in the bin), driving back to Jaycar to get an identical replacement, 
then trying that on the bench. If that doesn't work then it's the type of cable 
or the length. What a waste of time and money.


Fw: Re: [OT] Post NBN problem

2017-09-10 Thread



From: Greg Low 
Date: 11/09/2017 12:03:57 PM
Subject: Re: [OT] Post NBN problem
To: ozdotnet@ozdotnet.com

Also consider not using a VM at all. Your life gets much easier when you deploy 
websites and DBs as platform services.


Regards,
 
Greg
 
Dr Greg Low
 
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me
On 11/09/2017 10:50:32 AM, David Connors  wrote:

On Mon, 11 Sep 2017 at 08:17 Greg Keogh  wrote:

You can get an Azure VM which will only cost around $15-20 per month. Depends 
on your budget, but you can then install SQL Server Express easily.

I just created my first Azure VM in a few years as an experiment. I couldn't 
find the cheap A plans and kept getting offered $80/month as the cheapest. 
D'oh, you have to click a tiny "show all" link. The cheapest A0 plan is about 
$22/month -- GK

Don't you have MSDN? They give you a couple of hundred a month in free compute. 
 
--

David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363


Re: [OT] Post NBN problem

2017-09-09 Thread
Two thoughts:

1. Why not use a dynamic DNS service instead?
2. Why host websites at home on a dynamic service? (We used to do that so very 
long ago. I don't see much point to it now)

Regards,
 
Greg
 
Dr Greg Low
 
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me
On 10/09/2017 9:12:51 AM, Greg Keogh  wrote:
Why don't you move to a provider that offers a fixed IP option?
Our Telstra contract expires in mid October, so after that we'll go NBN plan 
shopping! -- GK


RE: Stored procedure only ORM

2017-07-12 Thread
The #1 thing you must get right though is correctly typed (i.e. data typed) 
parameters.

One of the issues with many of the frameworks is that under the covers they use 
methods like AddWithValue() to add values to a parameters collection, so the 
parameter's data type is inferred from the .NET value rather than matched to 
the column. The problem is that when you finally see those sorts of problems, 
they are often in code that you didn't write.
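A minimal C# sketch of the difference (the table, column names, and connection 
string here are made up for illustration):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ParameterTypingDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "SELECT OrderId FROM dbo.Orders WHERE CustomerCode = @CustomerCode", conn))
        {
            // Risky: AddWithValue infers NVARCHAR from the .NET string. Against a
            // VARCHAR column that forces an implicit conversion, which can prevent
            // an index seek:
            // cmd.Parameters.AddWithValue("@CustomerCode", "ABC123");

            // Better: declare the parameter with the column's exact type and length.
            cmd.Parameters.Add("@CustomerCode", SqlDbType.VarChar, 20).Value = "ABC123";
        }
    }
}
```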

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Thursday, 13 July 2017 2:33 PM
To: ozDotNet 
Subject: Re: Stored procedure only ORM

There is nothing inherently wrong with an ORM just for stored procs, but what I 
think you are really talking about is the ability to drag your stored 
procedures onto a canvas and have it generate the code to interact with them. 
That's all ORMs really are: code generators, and most are not very good at it. 
The reason people steer clear of the more bulky ORMs is that they are slow to 
load, perform badly and sometimes generate some weird and wacky code.

I often use the Entity Framework canvas to generate all the code for my stored 
procs. It wipes out a whole class of bugs: it works so well that any issues are 
never due to the generated code, but are caused by issues in the stored procs 
themselves or in application logic. But I prefer not to use anything else. (I 
certainly don't use lazy loading, which might appear to give you some benefits 
in delivering features faster, but at the expense of massive technical debt, 
much slower performance, and usually a lot of network traffic. At the very 
least, use eager loading, and always be in control of exactly what is being 
loaded in each request.)

Some time ago I was considering writing a canvas to sit over the top of Dapper, 
so that I could generate Dapper code on dragging and dropping a stored 
procedure onto the canvas; however, the effort required to do that became 
prohibitive. It would have been awesome, though!






On Wed, Jul 12, 2017 at 6:19 PM, Greg Keogh wrote:
Thanks I will check it out

I've used Dapper to read SQLite, but not much more than that. I found it easy 
to use, so I'll guess it's probably lightweight and good for SQL procs too -- GK
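For what it's worth, calling a stored proc through Dapper is only a few lines. 
A hedged sketch (the proc name, result columns, and connection string are 
invented here), using Dapper's Query&lt;T&gt; overload with 
CommandType.StoredProcedure:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

class DapperProcDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
        {
            // The anonymous object becomes typed parameters on the command,
            // and each result row is materialised into a Customer.
            var customers = conn.Query<Customer>(
                "dbo.GetCustomersByState",
                new { StateCode = "VIC" },
                commandType: CommandType.StoredProcedure).ToList();
        }
    }
}
```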



RE: AZURE SQL Data Sync

2017-07-10 Thread
Yep, it might well do that. Other option to consider might be SQL Server 
replication to an Azure SQL DB (if that’s not supported yet, it’s about to be).

But the one that I’ve used very successfully when I need a read-only copy of 
some of the on-premises data in the cloud is:


  *   Create the Azure SQL DB
  *   In the on-premises SQL box, create a linked server to the Azure SQL DB
  *   Use a SQL Agent job (or similar) to just push the required info up into 
the Azure SQL DB via the linked server.

We also found a few tricks while doing this. For example, if we had to merge 
the data from on-premises to the Azure SQL DB, instead of doing a MERGE 
command, we just did INSERT commands instead. Then at the cloud end, we created 
an INSTEAD OF trigger to replace the INSERT with a merge. We routinely ended up 
with better performance overall. I’d never do it that way in an on-premises box 
but in this case the latency was the thing I needed to avoid, and it’s easier 
to just push info up and sort it out at the other end, rather than trying to 
work it out from the on-premises end.

Hopefully we’ll soon get support for External DataSource and External Table 
objects in on-premises SQL. They are already there in Azure. They make this 
experience much better again.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Scott Barnes
Sent: Tuesday, 11 July 2017 9:49 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: AZURE SQL Data Sync

Greg,

Awesome response firstly.

Secondly, the intent of its use is essentially to provide a continuum strategy 
for moving legacy (ASP.NET WebForms) apps towards the cloud. The first part of 
the strategy is to move on-prem hosting into VM instance based hosting (cloud). 
In doing this, there are residual on-prem database(s) that cannot themselves be 
moved into the cloud.

Example:
OnPrem there are two databases, the first being "Financial" and the second 
being "Employee". There is also a website that reads/writes to both of these 
databases depending on a variety of contexts. However, the Financial database 
is quite large and the website only uses say 10% of its total structure for a 
specific set of needs (think of it as being 100 tables but only 5 actually get 
used).

One can move the Employee database and the website from OnPrem into cloud-based 
hosting and therefore remove the residual hosting of these two from OnPrem. One 
can also create a copy of the "Financial" database (initially empty) based on 
the actually used "parts" of that database (the 5 tables). The website then 
gets its connection context updated to point at the VM.

The intent then is to use Data Sync to essentially push/pull data from OnPrem 
to the cloud as data is populated, either in real time or on a scheduled 
interval (either option).

The objective for Azure SQL Data Sync in this strategy is to act as a transport 
that ensures OnPrem data is kept up to date in the cloud, so that website(s) 
can use the Azure SQL instance for reads/writes as if the OnPrem database had 
also been moved.

Constraints on writes can also easily be avoided by using a background agent 
that manually feeds writes to the database, so one could also just assert that 
the Data Sync takes a volatile database onPrem and just pushes "snapshots in 
time" of said data into the cloud.

From what I've read this looks like it's in Azure SQL Data Sync's wheelhouse... 
but given the volatility in Azure's weekly product management it's important 
not to assume :)



---
Regards,
Scott Barnes
http://www.riagenic.com

On Mon, Jul 10, 2017 at 8:34 PM, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
Hi Scott

Up to a few months back, I would have said “run away fast”. But now not so sure.

This was a “product” that stayed in “preview” mode for so very long. Blog posts 
had long ago stopped and many of us for years had been asking if it was another 
product that was just silently dropped without actually being put to death.

But I met with Lindsey Allen last year and when we were discussing it, she said 
that it was going to GA. I was not expecting that. And sure enough, there has 
been some life back in the blog, etc. lately, and some updates did occur to the 
code. It’s moved across into the new Azure portal.

It’s still quite a distance from being what I’d really consider a strong 
product but I hope it succeeds as there is a real need for it. One of the 
biggest limitations is the number of replicas involved.

What are you looking to use it for? Often there are better alternatives.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mob

RE: AZURE SQL Data Sync

2017-07-10 Thread
Hi Scott

Up to a few months back, I would have said “run away fast”. But now not so sure.

This was a “product” that stayed in “preview” mode for so very long. Blog posts 
had long ago stopped and many of us for years had been asking if it was another 
product that was just silently dropped without actually being put to death.

But I met with Lindsey Allen last year and when we were discussing it, she said 
that it was going to GA. I was not expecting that. And sure enough, there has 
been some life back in the blog, etc. lately, and some updates did occur to the 
code. It’s moved across into the new Azure portal.

It’s still quite a distance from being what I’d really consider a strong 
product but I hope it succeeds as there is a real need for it. One of the 
biggest limitations is the number of replicas involved.

What are you looking to use it for? Often there are better alternatives.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Scott Barnes
Sent: Monday, 10 July 2017 2:25 PM
To: ozDotNet 
Subject: AZURE SQL Data Sync

Anyone have experience using Azure SQL Data Sync?  Any "If they only put this 
on the back of the brochure" moments that left you with buyers remorse?


---
Regards,
Scott Barnes
http://www.riagenic.com


RE: Global SQL Server timeout

2017-07-09 Thread
Yep, connection timeout != command timeout.

You need to have a command timeout set on each command that's created. It sucks 
if the devs didn't do that up front, but I see this often. It works great until 
queries take more than 30 seconds.
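The distinction in ADO.NET terms, as a sketch (the server, database, and proc 
names are illustrative):

```csharp
using System.Data;
using System.Data.SqlClient;

class TimeoutDemo
{
    static void Main()
    {
        // "Connection Timeout" in the connection string only limits how long
        // opening the connection may take; it does not affect running queries.
        var conn = new SqlConnection(
            "Server=.;Database=Sales;Integrated Security=true;Connection Timeout=30");

        using (var cmd = new SqlCommand("dbo.LongRunningReport", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Each command carries its own timeout. The default is 30 seconds,
            // which is why untouched code starts failing once queries get slow.
            cmd.CommandTimeout = 120;   // seconds; 0 means wait indefinitely
        }
    }
}
```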

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Preet Sangha
Sent: Monday, 10 July 2017 12:50 PM
To: ozDotNet 
Subject: Re: Global SQL Server timeout

;Connection Timeout=XX in the connection string (where XX is in seconds) is the 
way to do it in a single connection string.

I don't know about EL.


regards,
Preet, in Auckland NZ


On 10 July 2017 at 14:27, Greg Keogh wrote:
Folks, I have some old code that uses a mixture of Enterprise Library 5 and 
traditional ADO.NET classes. On some machines I'm getting 
command timeouts at 30 seconds. Is there a way of globally changing the timeout 
for all commands on the connection, perhaps by changing the connection string?

I could get into every db call and set the timeout on each command, but there 
are hundreds of them. That's why I'm looking for some global change that avoids 
code changes.

Greg K



Re: Powershell testing

2017-07-08 Thread
Hi Greg

There are some very good PS courses on MVA, right from the horse's (Snover's) 
mouth.

I have moments with PS where I'm stunned how much I get done and how fast. 
Other days, I want to poke my eyes out with a sharp stick after trying to do 
something that seemed trivial.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com  on behalf 
of Greg Keogh 
Sent: Saturday, July 8, 2017 3:28:01 PM
To: ozDotNet
Subject: Re: Powershell testing

I think I found the cause of the weird behaviour. You have to put function 
definitions at the top, before the outer level code. It's not like C/C++/C# or 
a real language; it's read from the top down (even BAT files are better than 
that). I don't know how it intermittently worked, but I suppose it cached the 
functions; that's the only crazy explanation.

Calling COM functions from PS is really fragile work. The slightest mistake 
will result in a cascade of incomprehensible errors.

Last night I wondered why the script was failing for over half an hour without 
any useful message. It turns out I had the MSI file open in Orca and couldn't 
see the window, which was causing the script to crash with a generic COM error.

GK

On 8 July 2017 at 15:10, Greg Keogh wrote:
How do people write PowerShell scripts in a productive way?

I'm writing my first non-trivial one to query and update the tables in MSI 
files. I reckon I have wasted more than half the coding time (many hours) on 
utterly infuriating and bewildering problems. I'm writing the script in the ISE 
and I hit F5 to run it and check the results.

After making a code change I hit F5 and it runs the code before the change; the 
second F5 runs with the change. Even saving the script makes no difference and 
I have to hit F5 twice to see the results.

If I make a small error the script crashes. Sometimes when I fix the error it 
continues to crash with an unrelated error as if I never corrected it. It's 
like an error gets "stuck" and can't be undone. I made a tiny correctional 
change about 15 minutes ago and the script crashes non-stop with a "not 
recognised" error, which I know is bullshit. I've even tried rebooting, but the 
correct script is still crashing.

Is this a joke? What century am I in?

Greg K



RE: [OT] New surface laptop

2017-07-04 Thread
Yep, been ok, but mostly use it with a mouse anyway

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tom Rutter
Sent: Wednesday, 5 July 2017 1:28 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] New surface laptop

How is the touchpad on the spectre?

On Wed, Jul 5, 2017 at 10:54 AM, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
100% agree. I have an HP Spectre x360 i7 (quite like it BTW) but needing a 
dongle to get LTE is a pain in the neck (1st world problem). Love my P50 with 
built-in fast LTE.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 5 July 2017 10:23 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] New surface laptop

On Wed, 5 Jul 2017 at 10:04 Tom Rutter <therut...@gmail.com> wrote:
There's no real point buying a Surface Pro unless you need the pen capability 
and tablet mode. The reason I was thinking of going with a Microsoft laptop was 
so I don't get the bloatware I'm used to dealing with.

Lack of LTE is a deal-breaker for me on the MS lappies. Yes, you can tether or 
carry a WiFi dongle with you, but once you've experienced the convenience of 
onboard LTE you never go back.

It boggles my mind how their hardware people don't put modems in these devices 
when MS is so committed to the cloud.

David.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363



RE: [OT] New surface laptop

2017-07-04 Thread
100% agree. I have an HP Spectre x360 i7 (quite like it BTW) but needing a 
dongle to get LTE is a pain in the neck (1st world problem). Love my P50 with 
built-in fast LTE.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 5 July 2017 10:23 AM
To: ozDotNet 
Subject: Re: [OT] New surface laptop

On Wed, 5 Jul 2017 at 10:04 Tom Rutter wrote:
There's no real point buying a Surface Pro unless you need the pen capability 
and tablet mode. The reason I was thinking of going with a Microsoft laptop was 
so I don't get the bloatware I'm used to dealing with.

Lack of LTE is a deal-breaker for me on the MS lappies. Yes, you can tether or 
carry a WiFi dongle with you, but once you've experienced the convenience of 
onboard LTE you never go back.

It boggles my mind how their hardware people don't put modems in these devices 
when MS is so committed to the cloud.

David.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363


RE: Azure Active Directory

2017-06-21 Thread
Sorry, “upload”. Suddenly realised the typo could sound like a real thing 
(“ipload”).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Thursday, 22 June 2017 5:56 AM
To: piers.willi...@gmail.com; ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory



If it's a one off, you can just ipload a CSV. I'm presuming there is an ongoing 
need.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: piers.willi...@gmail.com <piers.willi...@gmail.com>
Sent: Thursday, June 22, 2017 1:21:33 AM
To: Greg Low (罗格雷格博士); ozDotNet
Subject: RE: Azure Active Directory

That seems like hard work compared to using the PowerShell cmdlets. If you had 
an ongoing integration that needed to do it, sure. Hit the REST API for a 
one-off? Hmm.

(googles ‘bulk create users in azure active directory’)
https://blogs.msdn.microsoft.com/charles_sterling/2015/06/29/creating-users-in-an-azure-ad-in-bulk/

From: Greg Low (罗格雷格博士) <g...@greglow.com>
Sent: Wednesday, 21 June 2017 4:44 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory

Hi Greg

You make a call to get a token, then call the Graph API to create users. 
https://msdn.microsoft.com/library/azure/ad/graph/api/users-operations#CreateUser


Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Greg Keogh <gfke...@gmail.com>
Sent: Wednesday, June 21, 2017 5:12:25 PM
To: ozDotNet
Subject: Re: Azure Active Directory

Chaps, I spent almost four hours this afternoon attempting to write some 
managed code that authenticated a user/password against Azure AD from a native 
app. I know you're not supposed to handle credentials like that, but it was an 
experiment for migration of the old database. I read hundreds of confusing and 
conflicting articles on the subject and they generally say it's possible, but I 
failed despite heroic and obtuse efforts. I presume it just doesn't work the 
way I think and I'm using the frameworks incorrectly. To be honest, I'm not 
even sure I had the AD environment setup correctly, so I might have been doubly 
wasting my time.

Now I'm wondering how I would migrate hundreds of users from the legacy 
database into Azure AD. Anyone done that?

GK

On 21 June 2017 at 12:19, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
AAD is a wonderful tool really. Keep in mind that it has a couple of flavours, 
B2C (business to consumer) being the latest.

I’ve got clients who moved to it and simply love it. One is a car manufacturer 
who used to have to manage domains for dealers, etc. They used to spend their 
life with password and access issues. Now they just use 2 factor auth and 
cloud-based password reset, etc. and that’s all pretty much disappeared.

It’s also worth thinking about the fact that AAD is what anyone using Office 
365 will already be using anyway. And it can then be the directory for a big 
range of other things – Microsoft stuff like Power BI, Flow, Office 365, etc. 
but also others like DropBox, ZenDesk, etc, etc, etc.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 21 June 2017 10:45 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory

Yooiks! I'm not quite sure what I want (which is a worry). WAAD vs AADDS

You say WAAD is more light-weight, which probably suits us, I think.

Overall, as a coder, I want to put all authentication and permission/roles 
information for all of our apps and users in a single place where it can be 
maint

Re: Azure Active Directory

2017-06-21 Thread
If it's a one off, you can just ipload a CSV. I'm presuming there is an ongoing 
need.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: piers.willi...@gmail.com <piers.willi...@gmail.com>
Sent: Thursday, June 22, 2017 1:21:33 AM
To: Greg Low (罗格雷格博士); ozDotNet
Subject: RE: Azure Active Directory

That seems like hard work compared to using the PowerShell cmdlets. If you had 
an ongoing integration that needed to do it, sure. Hit the REST API for a 
one-off? Hmm.

(googles ‘bulk create users in azure active directory’)
https://blogs.msdn.microsoft.com/charles_sterling/2015/06/29/creating-users-in-an-azure-ad-in-bulk/

From: Greg Low (罗格雷格博士) <g...@greglow.com>
Sent: Wednesday, 21 June 2017 4:44 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory

Hi Greg

You make a call to get a token, then call the Graph API to create users. 
https://msdn.microsoft.com/library/azure/ad/graph/api/users-operations#CreateUser


Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Greg Keogh <gfke...@gmail.com>
Sent: Wednesday, June 21, 2017 5:12:25 PM
To: ozDotNet
Subject: Re: Azure Active Directory

Chaps, I spent almost four hours this afternoon attempting to write some 
managed code that authenticated a user/password against Azure AD from a native 
app. I know you're not supposed to handle credentials like that, but it was an 
experiment for migration of the old database. I read hundreds of confusing and 
conflicting articles on the subject and they generally say it's possible, but I 
failed despite heroic and obtuse efforts. I presume it just doesn't work the 
way I think and I'm using the frameworks incorrectly. To be honest, I'm not 
even sure I had the AD environment setup correctly, so I might have been doubly 
wasting my time.

Now I'm wondering how I would migrate hundreds of users from the legacy 
database into Azure AD. Anyone done that?

GK

On 21 June 2017 at 12:19, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
AAD is a wonderful tool really. Keep in mind that it has a couple of flavours, 
B2C (business to consumer) being the latest.

I’ve got clients who moved to it and simply love it. One is a car manufacturer 
who used to have to manage domains for dealers, etc. They used to spend their 
life with password and access issues. Now they just use 2 factor auth and 
cloud-based password reset, etc. and that’s all pretty much disappeared.

It’s also worth thinking about the fact that AAD is what anyone using Office 
365 will already be using anyway. And it can then be the directory for a big 
range of other things – Microsoft stuff like Power BI, Flow, Office 365, etc. 
but also others like DropBox, ZenDesk, etc, etc, etc.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 21 June 2017 10:45 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory

Yooiks! I'm not quite sure what I want (which is a worry). WAAD vs AADDS

You say WAAD is more light-weight, which probably suits us, I think.

Overall, as a coder, I want to put all authentication and permission/roles 
information for all of our apps and users in a single place where it can be 
maintained by admin staff, and it's easy to query from .NET code.

Am I wrong to regard WAAD as some sort of "magic" database to where I can stuff 
all our vintage data? Perhaps I'm thinking like a reductionist and expecting a 
quick fix.

If all you need to do is put WAAD authentication in front of a web app, then 
this is a piece of piss. Just deploy your app into App Service or App Service 
Environment and then turn on Azure AD auth. The App Service intercepts requests 
and does the SAML login for you transparently. The logged-on user gets 
presented back to the app in a cookie.

This is a good clue. I'll look into the details of doing this.

GK




Re: Azure Active Directory

2017-06-21 Thread
Hi Greg

You make a call to get a token, then call the Graph API to create users. 
https://msdn.microsoft.com/library/azure/ad/graph/api/users-operations#CreateUser
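A rough C# sketch of that flow against the (then-current) Azure AD Graph 
endpoint. The tenant name, token acquisition, and user values below are 
placeholders; the property names follow the documented CreateUser operation:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateAadUser
{
    static async Task Main()
    {
        string accessToken = "...";                // acquired separately, e.g. via ADAL
        string tenant = "contoso.onmicrosoft.com"; // placeholder tenant

        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Minimal user payload per the Graph API CreateUser operation.
        string user = @"{
            ""accountEnabled"": true,
            ""displayName"": ""Sample User"",
            ""mailNickname"": ""sampleuser"",
            ""userPrincipalName"": ""sampleuser@contoso.onmicrosoft.com"",
            ""passwordProfile"": {
                ""password"": ""..."",
                ""forceChangePasswordNextLogin"": true
            }
        }";

        // POST to the tenant's users collection; a 201 response returns the new user.
        var response = await client.PostAsync(
            $"https://graph.windows.net/{tenant}/users?api-version=1.6",
            new StringContent(user, Encoding.UTF8, "application/json"));
    }
}
```

For migrating hundreds of users, the same call would just be made in a loop 
over the legacy rows.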


Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Greg Keogh <gfke...@gmail.com>
Sent: Wednesday, June 21, 2017 5:12:25 PM
To: ozDotNet
Subject: Re: Azure Active Directory

Chaps, I spent almost four hours this afternoon attempting to write some 
managed code that authenticated a user/password against Azure AD from a native 
app. I know you're not supposed to handle credentials like that, but it was an 
experiment for migration of the old database. I read hundreds of confusing and 
conflicting articles on the subject and they generally say it's possible, but I 
failed despite heroic and obtuse efforts. I presume it just doesn't work the 
way I think and I'm using the frameworks incorrectly. To be honest, I'm not 
even sure I had the AD environment setup correctly, so I might have been doubly 
wasting my time.

Now I'm wondering how I would migrate hundreds of users from the legacy 
database into Azure AD. Anyone done that?

GK

On 21 June 2017 at 12:19, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
AAD is a wonderful tool really. Keep in mind that it has a couple of flavours, 
B2C (business to consumer) being the latest.

I’ve got clients who moved to it and simply love it. One is a car manufacturer 
who used to have to manage domains for dealers, etc. They used to spend their 
life with password and access issues. Now they just use 2 factor auth and 
cloud-based password reset, etc. and that’s all pretty much disappeared.

It’s also worth thinking about the fact that AAD is what anyone using Office 
365 will already be using anyway. And it can then be the directory for a big 
range of other things – Microsoft stuff like Power BI, Flow, Office 365, etc. 
but also others like DropBox, ZenDesk, etc, etc, etc.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 21 June 2017 10:45 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Active Directory

Yooiks! I'm not quite sure what I want (which is a worry). WAAD vs AADDS

You say WAAD is more light-weight, which probably suits us, I think.

Overall, as a coder, I want to put all authentication and permission/roles 
information for all of our apps and users in a single place where it can be 
maintained by admin staff, and it's easy to query from .NET code.

Am I wrong to regard WAAD as some sort of "magic" database to where I can stuff 
all our vintage data? Perhaps I'm thinking like a reductionist and expecting a 
quick fix.

If all you need to do is put WAAD authentication in front of a web app, then 
this is a piece of piss. Just deploy your app into App Service or an App Service 
Environment and then turn on Azure AD auth. The App Service intercepts requests 
and does the SAML login for you transparently. The logged on user gets 
presented back to the app in a cookie.

This is a good clue. I'll look into the details of doing this.

GK
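[Editor's sketch] To make the hand-off described above concrete: alongside the cookie, App Service authentication forwards the signed-in identity to the app in request headers such as X-MS-CLIENT-PRINCIPAL-NAME. A minimal Python sketch — the header names are the standard Easy Auth ones, but the example values are made up:

```python
def authenticated_user(headers):
    """Return the App Service authenticated user name, or None.

    App Service Authentication ("Easy Auth") validates the login before
    the request reaches the app, then forwards the identity in request
    headers. Header names are lower-cased here for lookup convenience.
    """
    normalised = {k.lower(): v for k, v in headers.items()}
    return normalised.get("x-ms-client-principal-name")

# Example: headers roughly as App Service might forward them (values invented).
headers = {
    "X-MS-CLIENT-PRINCIPAL-NAME": "alice@contoso.example",
    "X-MS-CLIENT-PRINCIPAL-ID": "00000000-0000-0000-0000-000000000001",
}
print(authenticated_user(headers))  # alice@contoso.example
```

If the header is absent (auth not enabled, or anonymous access allowed), the function returns None, so the app can fall back or reject the request.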



RE: Azure Active Directory

2017-06-20 Thread
AAD is a wonderful tool really. Keep in mind that it has a couple of flavours, 
B2C (business to consumer) being the latest.

I’ve got clients who moved to it and simply love it. One is a car manufacturer 
who used to have to manage domains for dealers, etc. They used to spend their 
life with password and access issues. Now they just use 2 factor auth and 
cloud-based password reset, etc. and that’s all pretty much disappeared.

It’s also worth thinking about the fact that AAD is what anyone using Office 
365 will already be using anyway. And it can then be the directory for a big 
range of other things – Microsoft stuff like Power BI, Flow, Office 365, etc. 
but also others like DropBox, ZenDesk, etc, etc, etc.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 21 June 2017 10:45 AM
To: ozDotNet 
Subject: Re: Azure Active Directory

Yooiks! I'm not quite sure what I want (which is a worry). WAAD vs AADDS

You say WAAD is more light-weight, which probably suits us, I think.

Overall, as a coder, I want to put all authentication and permission/roles 
information for all of our apps and users in a single place where it can be 
maintained by admin staff, and it's easy to query from .NET code.

Am I wrong to regard WAAD as some sort of "magic" database to where I can stuff 
all our vintage data? Perhaps I'm thinking like a reductionist and expecting a 
quick fix.

If all you need to do is put WAAD authentication in front of a web app, then 
this is a piece of piss. Just deploy your app into App Service or an App Service 
Environment and then turn on Azure AD auth. The App Service intercepts requests 
and does the SAML login for you transparently. The logged on user gets 
presented back to the app in a cookie.

This is a good clue. I'll look into the details of doing this.

GK


RE: [OT] Sit/stand desk results

2017-06-19 Thread
Yep, it’s the same logic that got cholesterol in so much trouble. They assumed 
that eating cholesterol increased your cholesterol level. Never been true 
though yet it was widely promoted and stopped people eating eggs, etc. for 
decades.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Tuesday, 20 June 2017 3:48 PM
To: ozDotNet 
Subject: Re: [OT] Sit/stand desk results

When I was a wee lad I remember doing science experiments that showed that not 
all compounds were equal, and some chemical reactions produced more energy than 
others. If you consume food that doesn't digest, it might have lots of calories, 
but you won't absorb any of them. So this whole idea of 
calories consumed equalling calories stored or used doesn't actually make sense 
to me. It's the compounds that count and the chemical reactions on those 
compounds. Glucose gets metabolised all over the body, but fructose gets 
digested only in the liver. It's a totally different set of chemical reactions 
going on.

On 20 Jun 2017 3:39 PM, "David Richards" 
> wrote:
I've been avoiding this conversation.  I've had arguments with friends over 
this.  However, I decided to give my two kilojoules since, while it's OT, it's 
very relevant to IT types, who generally tend to have a sedentary lifestyle.

There is a fundamental law of physics at work here: Conservation of energy.  
The change in energy in a system (fat, glucose, protein, etc) is energy in 
(kilojoules absorbed from food) minus energy out (moving, thinking, living).  
It doesn't matter what your body does or what form the energy is in. If you use 
more energy than you absorb, you will lose weight.

I've counted kilojoules, tracked exercise and monitored weight.  Doing this, I 
was able to lose weight quite successfully and with little difficulty.  People 
mention hormones and starvation mode, etc.  This doesn't somehow override 
conservation of energy.  It just means you have to continually monitor how your 
weight is changing based on the kilojoules in/out.  As your body becomes more 
efficient at absorbing energy and more efficient at living, you will need to 
decrease the kilojoules in to compensate.

My anecdotal example:  I would set a target average daily kilojoule intake 
(averaged over each week) and monitor my weight.  When it stopped going down, I 
decreased my target daily average until I started losing weight again.  When I 
started, my daily target was around 8000 kJ (before that I was eating closer to 
1 kJ).  By the time I got to my target weight, I had decreased it to 6000 
kJ.  I was less hungry, had more energy, ate healthier and spent less money on 
food.

David

"If we can hit that bullseye, the rest of the dominoes
 will fall like a house of cards... checkmate!"
 -Zapp Brannigan, Futurama

On 20 June 2017 at 14:32, Bec C 
> wrote:
I'd have to respectfully disagree. Tried it and lost weight.


On Tuesday, 20 June 2017, Stephen Price 
> wrote:
Nope.

If you cut calories and have any carbs in your system then you will have 
insulin in your system and your body will be in storing mode. Impossible to 
lose ANY weight if you are only storing.

To bring it back on topic for the list it would be like being only able to 
append records to a database table and not be able to delete. If you can never 
delete then its impossible to make the table smaller.

Insulin = store only.

It's hormonal not caloric. You would put weight on if your lower calories were 
high carb/sugars. Try it.




RE: [OT] Sit/stand desk results

2017-06-19 Thread
A great example of the problem is that almost everyone who’s ever been on 
shows like “Biggest Loser” is now heavier than when they went on the show. 
Worse, most now have slower metabolisms and are worse off than if they’d never 
heard of the show. Yet that sort of caloric reduction and exercise is still 
what most medicos preach, despite all evidence to the contrary.

I’ve been type 2 for a few years. I know that if I’d followed what the diabetes 
educator told me, I’d still be on medication, probably moving towards insulin 
injections.

Instead, I removed the need for it within 6 months. Hope to never need it 
again. But carbs (and particularly sugar) were the culprit for me.

I recently had a specialist ask me what my HBA1C was like. I said “last time it 
was 5.8”. He asked me how I kept it down. I said “by modifying what I eat”. He 
said “I find that extremely hard to believe”. And I’m sure, based on what he’s 
been taught, that that’s what he expected. So he did another test and it was 
5.7.

The look on his face and his “that’s remarkable” comment was worth it.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Richards
Sent: Tuesday, 20 June 2017 3:40 PM
To: ozDotNet 
Subject: Re: [OT] Sit/stand desk results

I've been avoiding this conversation.  I've had arguments with friends over 
this.  However, I decided to give my two kilojoules since, while it's OT, it's 
very relevant to IT types, who generally tend to have a sedentary lifestyle.

There is a fundamental law of physics at work here: Conservation of energy.  
The change in energy in a system (fat, glucose, protein, etc) is energy in 
(kilojoules absorbed from food) minus energy out (moving, thinking, living).  
It doesn't matter what your body does or what form the energy is in. If you use 
more energy than you absorb, you will lose weight.

I've counted kilojoules, tracked exercise and monitored weight.  Doing this, I 
was able to lose weight quite successfully and with little difficulty.  People 
mention hormones and starvation mode, etc.  This doesn't somehow override 
conservation of energy.  It just means you have to continually monitor how your 
weight is changing based on the kilojoules in/out.  As your body becomes more 
efficient at absorbing energy and more efficient at living, you will need to 
decrease the kilojoules in to compensate.

My anecdotal example:  I would set a target average daily kilojoule intake 
(averaged over each week) and monitor my weight.  When it stopped going down, I 
decreased my target daily average until I started losing weight again.  When I 
started, my daily target was around 8000 kJ (before that I was eating closer to 
1 kJ).  By the time I got to my target weight, I had decreased it to 6000 
kJ.  I was less hungry, had more energy, ate healthier and spent less money on 
food.

David

"If we can hit that bullseye, the rest of the dominoes
 will fall like a house of cards... checkmate!"
 -Zapp Brannigan, Futurama

On 20 June 2017 at 14:32, Bec C 
> wrote:
I'd have to respectfully disagree. Tried it and lost weight.


On Tuesday, 20 June 2017, Stephen Price 
> wrote:
Nope.

If you cut calories and have any carbs in your system then you will have 
insulin in your system and your body will be in storing mode. Impossible to 
lose ANY weight if you are only storing.

To bring it back on topic for the list it would be like being only able to 
append records to a database table and not be able to delete. If you can never 
delete then its impossible to make the table smaller.

Insulin = store only.

It's hormonal not caloric. You would put weight on if your lower calories were 
high carb/sugars. Try it.




RE: [OT] Sit/stand desk results

2017-06-19 Thread
Yep, agreed. And any of Gary Taubes’ content.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Piers Williams
Sent: Tuesday, 20 June 2017 1:30 PM
To: ozDotNet 
Subject: Re: [OT] Sit/stand desk results

'As far as losing weight goes it is all about calories'

Read The Case Against Sugar or Pure White and Deadly, or watch That Sugar Film, 
or The Men That Made Us Fat. They all make the point that the basic 
biochemistry (which is well established) absolutely disagrees with this. Fat, 
glucose and fructose all have very different pathways for metabolism, which 
makes a lie of the 'calorie is a calorie' mantra (itself accused of being an 
invention of the sugar industry). In That Sugar Film (admittedly a sample size 
of one) he puts on significant weight without changing total calorific intake, 
by swapping fat for sugar (and explains why).

I take everything I read highly skeptically, but in particular The Case Against 
Sugar is very comprehensively argued and well worth reading. The historical 
context is particularly damning.

On 20 Jun. 2017 08:53, "Bec C" 
> wrote:
You can be an idiot on any diet. I wouldn't believe everything you read either. 
I've seen studies that totally contradict each other.

Just for the record I'm not actually vegan. I tried it a few years ago.

As far as losing weight goes it is all about calories. Being healthy is a whole 
different thing.

Anyway too off topic now

On Tuesday, 20 June 2017, Piers Williams 
> wrote:
OOTT: At the risk of starting a flame war, I'm going to call shenanigans on 
this one (sorry Bec). Whilst most vegans probably have very healthy diets (due 
to increased awareness of what they eat) there's nothing inherent in veganism 
that actually ensures this, as a quick scan down the vegan society pages 
confirms: https://www.vegansociety.com/resources/lifestyle/food-and-drink. 
Plenty of sugary treats in that list described as vegan, even beans on toast is 
packed with the stuff.

It's *not* about the calories. 
https://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin

OOTT= off off-topic topic

On 20 Jun. 2017 06:29, "Bec C" 
> wrote:
Yep that podcast is fairly good. Veganism also works for losing weight, very 
hard to eat excess calories on a vegan diet.

On Tuesday, 20 June 2017, Stephen Price 
> wrote:
Totally agree on this point. I've been ketogenic for six months now (lost 6kg 
in the first month, have plateaued now but feel great). Some .net people may 
know Carl Franklin's been podcasting at 2ketodudes.com, 
and he's done an awesome job recording his progress. 6 months and he lost 80lb 
and is no longer type 2 diabetic.
Got so much out of it, I backed his kickstarter project to turn his town keto 
for a weekend. Flying out with my wife in a couple of weeks. Will be seeing the 
sights in New York, then up to New London for ketofest.
Btw, you don't have to be overweight to suffer the damaging effects of too 
much carbs/sugar. The inflammatory damage in your veins can't be seen from the 
outside.

One of the strange side effects I have noticed is that some days I forget to 
eat. Today, I had accidentally turned off my alarm so was running a bit late. 
Went to work with no breakfast, had one coffee at work, and worked right 
through lunch as I hadn't taken anything and office is a bit of a drive from 
places to eat. Barely noticed.
Don't miss sugar. Finding some awesome recipes along the way. Recently made 
deep fried chicken crumbed in pork rinds combined with Parmesan cheese.
So good. Hmm... this might possibly be the first recipe shared on this elist. :)

Anyway to keep on topic, had a standup desk and my last project, one of those 
motorised ones. Great for exercise and strengthening but not losing weight. 
What you put in your body has way more effect in that regard.  You can lose 
weight with zero exercise, but exercise is important for other reasons. I.e. 
Preventing muscles wasting away. If you don't use it, you lose it.

Cheers,
Stephen

From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of Piers Williams >
Sent: Monday, June 19, 2017 8:46:35 PM
To: ozDotNet
Subject: Re: [OT] Sit/stand desk results

There are quite a few people in my office now using sit-to-stand desks. I sent 
a scary article 

RE: Logo for Ozdotnet

2017-04-30 Thread
Yep, love the first one out of those.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Monday, 1 May 2017 3:13 PM
To: ozDotNet 
Subject: Re: Logo for Ozdotnet


Hey all,



I got one more logo out of Katie (the request to have an animal in the logo...)



See attached logos for the current best ones. I'll count up any replies to help 
decide.



cheers,

Stephen


From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of Stephen Price >
Sent: Monday, 17 April 2017 10:22:16 PM
To: ozDotNet
Subject: Re: Logo for Ozdotnet


I'll see if she has had a chance to do any work on it over the weekend, and 
will pass back the feedback about the states.

I don't mind either way. Will get a few final ones and we can vote which one it 
should be.



Hope everyone had a nice long weekend! I know I did.




From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of Tom Rutter >
Sent: Friday, 14 April 2017 11:34:29 AM
To: ozDotNet
Subject: Re: Logo for Ozdotnet

Your daughter has done well. As others have mentioned the states should either 
be divided correctly or not at all imo. Also any chance of the domain being 
oz.net? Anything else is uncool as far as I'm concerned.

On Monday, 10 April 2017, Stephen Price 
> wrote:
Passed feedback to Katie and here's what she came up with.
Three versions, a dark, light and a banner version.


I really love these ones; I think she's done a great job.

cheers
Stephen



From: 
ozdotnet-boun...@ozdotnet.com
 
>
 on behalf of David Connors 
>
Sent: Monday, 10 April 2017 3:56:58 PM
To: ozDotNet
Subject: Re: Logo for Ozdotnet

I'd make the text the logo. Greg's SQL downunder logo is a good model ... 
http://www.sqldownunder.com/SQLDownUnderSquareLogo2.jpg

On Mon, 10 Apr 2017 at 17:49 Stephen Price 
>
 wrote:

Yeah, I think it needs to be accurate and not confusing.



I'll get her to make it all on one line (with and without the .com on the end).

I like the .com as then it can be used as advertising. Can always have two 
versions I guess. One with and one without the .com.



We also have ozdotnet.com.au and more recently 
ozdotnet.io. (because some dumbarse thought that would be a 
much better domain name... sounds geeky)



So far option 6 is winning.


From: 
ozdotnet-boun...@ozdotnet.com
 
>
 on behalf of David Richards 
>
Sent: Monday, 10 April 2017 3:43:47 PM
To: ozDotNet
Subject: Re: Logo for Ozdotnet

#6 is the best of those presented.  Some suggestions:
- I think "oz.net.com" is just unclear.
- We probably don't need the ".com" as part of a logo.  The actual website can 
be additional text below the logo on any promotional material rather than part 
of the logo.
- Did you try "ozdotnet" on a single line?

David

"If we can hit that bullseye, the rest of the dominoes
 will fall like a house of cards... checkmate!"
 -Zapp Brannigan, Futurama

On 10 April 2017 at 16:48, Ian Thomas 
> 
wrote:
#2 – I like the demarcation of the states. Also, it looks OK in monochrome.


Ian Thomas
Albert Park, Victoria 3206 Australia

From: 
ozdotnet-boun...@ozdotnet.com
 
[mailto:ozdotnet-boun...@ozdotnet.com]
 On Behalf Of Stephen Price
Sent: Monday, 10 April 2017 3:59 PM
To: ozDotNet 
>
Subject: Logo for 

RE: Unit testing question and stored procedures

2017-04-25 Thread
Yep, that’s how we do it. Some people use transactions to do a similar thing 
but you can’t test transactional code by doing that.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Wednesday, 26 April 2017 3:08 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Unit testing question and stored procedures

So let me understand this. I believe what you are doing is having a database 
snapshot (or testing database) that you can continuously revert to its initial 
state, then you run the stored proc via nunit, then in the init for the next 
test, revert back to the initial state and run that test, etc.  I would have 
thought that it would take a lot of extra processing time to run tests that 
way, especially if a restore is needed?

I've used in-memory databases (via the database-first philosophy of EF entity 
creation) but they don't handle stored procs.

TSQLUnit looks...interesting. Must investigate.

On Wed, Apr 26, 2017 at 12:48 PM, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
I should have added that the dac framework stuff had testing but has now 
removed it.

Some use TSQLUnit but I’ve not found it any more useful and NUnit fits well 
with other testing.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Tony Wright
Sent: Wednesday, 26 April 2017 11:53 AM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Unit testing question and stored procedures

Hi all,

A while ago, we were discussing avoiding using LINQ to query sql server. The 
preferred method of querying discussed was either to use direct SQL calls or 
stored procs to perform data manipulation.

This was because the overhead of starting up Entity Framework is significant 
and the underlying queries produced by LINQ can be quite convoluted and 
inefficient. Lazy loading is also something to be avoided (at the very least 
you should be using eager loading, which forces you to be explicit about what 
related data is being included/loaded). As an aside, I’ve also seen a massive 
performance drop when using mappers to convert database objects in EF to POCO 
objects using tools such as AutoMapper.

Add to this, that putting some business logic in stored procs is about the most 
efficient way to perform data manipulation in a SQL Server database. It is 
unbelievably fast and efficient compared to passing all the data over the wire 
to your middle tier to perform any updates and then passing it back to commit 
the data to the database.

In fact, I would argue that the very fact that current “best practice” is to 
inefficiently pass all your data to the middle-tier to be modified, only to be 
returned to the database for the update, is a failure in modern development, 
but of course, there is not really an alternative if your intent is to perform 
proper unit testing. It is a very sad thing that modern enterprise 
development has not worked out how to utilise the full power of SQL Server 
other than to say "only use stored procs in special cases."

So the question I have is, if it was decided to put business logic in stored 
procedures (and some of you have, I know, even though a few of you with the 
purist hat would deny it!), how do people currently unit test their stored 
procs?

Kind regards,
Tony



RE: Unit testing question and stored procedures

2017-04-25 Thread
I should have added that the dac framework stuff had testing but has now 
removed it.

Some use TSQLUnit but I’ve not found it any more useful and NUnit fits well 
with other testing.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Wednesday, 26 April 2017 11:53 AM
To: ozDotNet 
Subject: Unit testing question and stored procedures

Hi all,

A while ago, we were discussing avoiding using LINQ to query sql server. The 
preferred method of querying discussed was either to use direct SQL calls or 
stored procs to perform data manipulation.

This was because the overhead of starting up Entity Framework is significant 
and the underlying queries produced by LINQ can be quite convoluted and 
inefficient. Lazy loading is also something to be avoided (at the very least 
you should be using eager loading, which forces you to be explicit about what 
related data is being included/loaded). As an aside, I’ve also seen a massive 
performance drop when using mappers to convert database objects in EF to POCO 
objects using tools such as AutoMapper.

Add to this, that putting some business logic in stored procs is about the most 
efficient way to perform data manipulation in a SQL Server database. It is 
unbelievably fast and efficient compared to passing all the data over the wire 
to your middle tier to perform any updates and then passing it back to commit 
the data to the database.

In fact, I would argue that the very fact that current “best practice” is to 
inefficiently pass all your data to the middle-tier to be modified, only to be 
returned to the database for the update, is a failure in modern development, 
but of course, there is not really an alternative if your intent is to perform 
proper unit testing. It is a very sad thing that modern enterprise 
development has not worked out how to utilise the full power of SQL Server 
other than to say "only use stored procs in special cases."

So the question I have is, if it was decided to put business logic in stored 
procedures (and some of you have, I know, even though a few of you with the 
purist hat would deny it!), how do people currently unit test their stored 
procs?

Kind regards,
Tony


RE: Unit testing question and stored procedures

2017-04-25 Thread
Hi Tony,

I’d still just use something like NUnit, along with all the other tests in your 
.NET code. Just put a wrapper calling them. Makes it easy to integrate with 
other tests.

One thing that I often do as well, is to have a wrapper that uses database 
snapshots, to get the test DB back into exactly the right state before each 
test. (the one exception is when testing performance)
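[Editor's sketch] The snapshot-based reset described above comes down to two T-SQL statements. The database name, logical file name, and snapshot path below are hypothetical; note that reverting requires exclusive access to the database, and any other snapshots of it must be dropped first.

```sql
-- Take a snapshot of the test database in its known-good state.
-- NAME must match the logical data file name of the source database.
CREATE DATABASE TestDb_Baseline
ON (NAME = TestDb_Data, FILENAME = 'C:\Snapshots\TestDb_Baseline.ss')
AS SNAPSHOT OF TestDb;

-- Before each test: revert the database to the snapshot state.
RESTORE DATABASE TestDb
FROM DATABASE_SNAPSHOT = 'TestDb_Baseline';
```

A test-framework setup method (e.g. NUnit's per-test setup) can simply execute the RESTORE before each test, which is the wrapper pattern mentioned here.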

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Wednesday, 26 April 2017 11:53 AM
To: ozDotNet 
Subject: Unit testing question and stored procedures

Hi all,

A while ago, we were discussing avoiding using LINQ to query sql server. The 
preferred method of querying discussed was either to use direct SQL calls or 
stored procs to perform data manipulation.

This was because the overhead of starting up Entity Framework is significant 
and the underlying queries produced by LINQ can be quite convoluted and 
inefficient. Lazy loading is also something to be avoided (at the very least 
you should be using eager loading, which forces you to be explicit about what 
related data is being included/loaded). As an aside, I’ve also seen a massive 
performance drop when using mappers to convert database objects in EF to POCO 
objects using tools such as AutoMapper.

Add to this, that putting some business logic in stored procs is about the most 
efficient way to perform data manipulation in a SQL Server database. It is 
unbelievably fast and efficient compared to passing all the data over the wire 
to your middle tier to perform any updates and then passing it back to commit 
the data to the database.

In fact, I would argue that the very fact that current “best practice” is to 
inefficiently pass all your data to the middle-tier to be modified, only to be 
returned to the database for the update, is a failure in modern development, 
but of course, there is not really an alternative if your intent is to perform 
proper unit testing. It is a very sad thing that modern enterprise 
development has not worked out how to utilise the full power of SQL Server 
other than to say "only use stored procs in special cases."

So the question I have is, if it was decided to put business logic in stored 
procedures (and some of you have, I know, even though a few of you with the 
purist hat would deny it!), how do people currently unit test their stored 
procs?

Kind regards,
Tony


RE: Logo for Ozdotnet

2017-04-10 Thread
Or still use 4 colours but divide the states naturally based on the boundaries?

One of the things I liked about the logo we had done was the kangaroo. That 
makes it subtle but obviously Australian.

Could we do something similar? Opera house or other shape immediately 
recognisable overseas as Australian?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Gardiner
Sent: Tuesday, 11 April 2017 9:49 AM
To: ozDotNet 
Subject: Re: Logo for Ozdotnet

I think these are good. I'll overlook my state being divided in two :-)
Wondering about the colours though - they're presumably based on the Microsoft 
logo colours. Would it be better to have them in the same order (eg. clockwise 
from top-left red, green, yellow, blue) ? Or is it intentional to not do that 
to avoid legal issues? Or maybe have them upside down as a nod to being 'down 
under'?
David

On 10 April 2017 at 21:30, Stephen Price 
> wrote:
Passed feedback to Katie and here's what she came up with.
Three versions, a dark, light and a banner version.


I really love these ones; I think she's done a great job.

cheers
Stephen



From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of David Connors >
Sent: Monday, 10 April 2017 3:56:58 PM
To: ozDotNet
Subject: Re: Logo for Ozdotnet

I'd make the text the logo. Greg's SQL downunder logo is a good model ... 
http://www.sqldownunder.com/SQLDownUnderSquareLogo2.jpg

On Mon, 10 Apr 2017 at 17:49 Stephen Price 
> wrote:

Yeah, I think it needs to be accurate and not confusing.



I'll get her to make it all on one line (with and without the .com on the end).

I like the .com as then it can be used as advertising. Can always have two 
versions I guess. One with and one without the .com.



We also have ozdotnet.com.au and more recently 
ozdotnet.io. (because some dumbarse thought that would be a 
much better domain name... sounds geeky)



So far option 6 is winning.


From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of David Richards 
>
Sent: Monday, 10 April 2017 3:43:47 PM
To: ozDotNet
Subject: Re: Logo for Ozdotnet

#6 is the best of those presented.  Some suggestions:
- I think "oz.net.com" is just unclear.
- We probably don't need the ".com" as part of a logo.  The actual website can 
be additional text below the logo on any promotional material rather than part 
of the logo.
- Did you try "ozdotnet" on a single line?

David

"If we can hit that bullseye, the rest of the dominoes
 will fall like a house of cards... checkmate!"
 -Zapp Brannigan, Futurama

On 10 April 2017 at 16:48, Ian Thomas 
> wrote:
#2 – I like the demarcation of the states. Also, it looks OK in monochrome.


Ian Thomas
Albert Park, Victoria 3206 Australia

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Monday, 10 April 2017 3:59 PM
To: ozDotNet >
Subject: Logo for Ozdotnet


Hey all,



My daughter, Katie, has given me a few logo designs based on what I asked for.



Plan is to put this on the website/forum (if/when that goes up) and make 
stickers and tshirts available (probably RedBubble). Turn it into something 
people can promote and get a bit of new blood.


Feedback welcome.

cheers
Stephen
p.s. personally I like 06, but have asked if she could have white text with a 
black outline (to make it easier to read). Also liked the OZ characters being 
slightly larger.




--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363



RE: Ozdotnet list

2017-04-04 Thread
Same. I find it interesting to hear local opinions, and to follow Greg K's 
journeys 

I'm always hesitant when I see email lists move to anything that looks like a 
forum. That seems to be where discussions are organised but where they go to 
die.

Regards,
 
Greg
 
Dr Greg Low
 
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com |http://greglow.me

-Original Message-
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Les Hughes
Sent: Tuesday, 4 April 2017 8:52 PM
To: ozdotnet@ozdotnet.com
Subject: Re: Ozdotnet list

I still read occasionally, if only to see what Greg K is trying to tame next ;)

Looking through my emails, I joined Stanski's list in 2002 (15 years!? 
eeep!) and moved across when the server was in death-throes and Connors stepped 
up. I see a few people still posting from when I started, which is a decent 
track record given the period of time.

I'm hesitant about changing anything, just because I likely wouldn't log into a
website or whatever else to view the content, and I suspect that old habits die 
hard for many others.

Cheers everyone :)

Les

  On 04/04/17 15:18, Tony Wright wrote:
> Yep, all good if that's the case
>
> On 4 Apr 2017 1:21 PM, "Stephen Price"  > wrote:
>
> Yep, If we keep the actual list email address consistent then all we
> are changing is the implementation of said list.
>
> I think it's a good idea to keep the primary function of the elist
> as it is. Anyone currently subscribed to the list will be on the
> replacement. It should be the same list, just delivered by a
> different backend.
>
> They can remove themselves if they decide they don't like it
> (perhaps more traffic isn't what they want, or it's not applicable
> to them anymore and they forgot they were on it... whatever the
> reason).
>
>
> 
> *From:* ozdotnet-boun...@ozdotnet.com
> 
>  > on behalf of David Connors
> >
> *Sent:* Tuesday, 4 April 2017 11:08:03 AM
> *To:* ozDotNet
> *Subject:* Re: Ozdotnet list
> On Tue, 4 Apr 2017 at 13:06 Tony Wright  > wrote:
>
> That does worry me. First, I don't agree with broadening the
> scope of the list as it benefits from being niche. Secondly,
> moving to a new environment, while exciting, could spell the end
> of the list as many spectators won't bother making the move across.
>
>
> Probably pretty low risk if it can still function as an email list
> (which doco says it does).
>
>
> --
> David Connors
> da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363
>



RE: XML files served by Azure Websites

2017-03-01 Thread
All good thanks folks. I think that was just feedvalidator being picky.

I still had an issue with iTunes though as they now require byte range support. 
That required changing the headers in Azure storage, which does support them 
but reports that it doesn’t if you’re on the old API.
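One quick way to confirm the byte-range behaviour iTunes now requires is to send a one-byte Range request and look at what comes back. A minimal sketch in Python (nothing here is specific to Azure storage; the probe URL would be whatever media file you want to test):

```python
# Sketch: decide whether a server honoured a byte-range request.
import urllib.request

def honours_byte_ranges(status, accept_ranges):
    """True if a Range request got 206 Partial Content, or the server
    at least advertises support via an Accept-Ranges: bytes header."""
    return status == 206 or (accept_ranges or "").lower() == "bytes"

def issue_range_probe(url):
    """Ask for the first byte only and report whether ranges work."""
    req = urllib.request.Request(url, headers={"Range": "bytes=0-0"})
    with urllib.request.urlopen(req) as resp:
        return honours_byte_ranges(resp.status,
                                   resp.headers.get("Accept-Ranges"))
```

A 200 response to a Range request means the server ignored the header and sent the whole resource, which is the failure mode described above.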

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of noonie
Sent: Wednesday, 1 March 2017 8:10 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: XML files served by Azure Websites

Greg,

This discussion:-

http://stackoverflow.com/questions/4832357/whats-the-difference-between-text-xml-vs-application-xml-for-webservice-respons

Seems to indicate that it's more a client issue. Your server response header is
setting the content type to text/xml but not the charset. Though that should be
good enough for modern clients, which read the xml document encoding and honour
it, some might still default to us-ascii. It may be that the feed validator is
just being "picky".

IIS should let you set the charset on that content type so the feed validates.

https://forums.iis.net/t/1155439.aspx
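For a static .xml file on an Azure website / IIS, one place the charset can be attached is the static-content mapping in web.config. A hypothetical fragment (not taken from the thread; the mimeType value is the part doing the work):

```xml
<!-- Illustrative web.config fragment: serve .xml with an explicit
     UTF-8 charset so the response header matches the document. -->
<configuration>
  <system.webServer>
    <staticContent>
      <remove fileExtension=".xml" />
      <mimeMap fileExtension=".xml" mimeType="text/xml; charset=UTF-8" />
    </staticContent>
  </system.webServer>
</configuration>
```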

--
noonie


On 1 March 2017 at 19:04, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
But that still leaves the question on how to change that. It's just serving up 
a static xml file. How is the content type for that specified? And more 
importantly, where?
Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
<ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>> on behalf 
of Bill McCarthy 
<bill.mccarthy.li...@live.com.au<mailto:bill.mccarthy.li...@live.com.au>>
Sent: Wednesday, March 1, 2017 5:32:06 PM

To: ozDotNet
Subject: RE: XML files served by Azure Websites

Just looked at feedvalidator.org<http://feedvalidator.org> .  Look at the help 
link:
http://www.feedvalidator.org/docs/warning/EncodingMismatch.html

your site is serving up response content type: text/xml


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 1 March 2017 4:55 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: XML files served by Azure Websites

Yes I did think BOM was on UTF-16. Either way, issue seems to be the header 
from the site. No idea where to set it. I'm suspecting that the lack of a value 
probably sends this as a default. Can't find ASCII mentioned anywhere in 
project files.
Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
<ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>> on behalf 
of Bill McCarthy 
<bill.mccarthy.li...@live.com.au<mailto:bill.mccarthy.li...@live.com.au>>
Sent: Wednesday, March 1, 2017 3:05:00 PM
To: ozDotNet
Subject: RE: XML files served by Azure Websites

Thought it was the other way around and that the BOM was unnecessary for utf-8.
To me Greg's problem looks like the server is sending a response header saying
the content type is ASCII, then sending an xml file which is utf-8. Would have
to do an old-school byte dump to test, as I doubt any text editor would permit
the file to be saved as ASCII; it would be an invalid ASCII file.

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:55 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: XML files served by Azure Websites

On Wed, 1 Mar 2017 at 13:41 Bill McCarthy 
<bill.mccarthy.li...@live.com.au<mailto:bill.mccarthy.li...@live.com.au>> wrote:
The file itself is utf-8, or unicode, due to special characters in it, e.g. Lòpez.
So the problem is not with the file.

No, a UTF-8 stream is defined as such by a byte order marker at the start of 
the stream. You can have UTF-8 files composed entirely of ASCII characters.
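The BOM check is just a look at the first three bytes (EF BB BF). A small sketch, with the caveat raised earlier in the thread that the BOM is optional for UTF-8, so its absence proves nothing:

```python
# Sketch: does a byte stream start with the UTF-8 BOM (EF BB BF)?
# Note: the BOM is optional in UTF-8, so absence doesn't mean the
# data isn't UTF-8; ASCII-only content is already valid UTF-8 too.
def starts_with_utf8_bom(data):
    return data.startswith(b"\xef\xbb\xbf")
```

For example, Python's "utf-8-sig" codec writes the BOM while plain "utf-8" does not, which is an easy way to produce both variants for testing.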

--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363<tel:+61%20417%20189%20363>
--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363



Re: XML files served by Azure Websites

2017-03-01 Thread
But that still leaves the question on how to change that. It's just serving up 
a static xml file. How is the content type for that specified? And more 
importantly, where?

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com  on behalf 
of Bill McCarthy 
Sent: Wednesday, March 1, 2017 5:32:06 PM
To: ozDotNet
Subject: RE: XML files served by Azure Websites

Just looked at feedvalidator.org .  Look at the help link:
http://www.feedvalidator.org/docs/warning/EncodingMismatch.html

your site is serving up response content type: text/xml


From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 1 March 2017 4:55 PM
To: ozDotNet 
Subject: Re: XML files served by Azure Websites

Yes I did think BOM was on UTF-16. Either way, issue seems to be the header 
from the site. No idea where to set it. I'm suspecting that the lack of a value 
probably sends this as a default. Can't find ASCII mentioned anywhere in 
project files.
Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com 
> on behalf 
of Bill McCarthy 
>
Sent: Wednesday, March 1, 2017 3:05:00 PM
To: ozDotNet
Subject: RE: XML files served by Azure Websites

Thought it was the other way around and that the BOM was unnecessary for utf-8.
To me Greg's problem looks like the server is sending a response header saying
the content type is ASCII, then sending an xml file which is utf-8. Would have
to do an old-school byte dump to test, as I doubt any text editor would permit
the file to be saved as ASCII; it would be an invalid ASCII file.

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:55 PM
To: ozDotNet >
Subject: Re: XML files served by Azure Websites

On Wed, 1 Mar 2017 at 13:41 Bill McCarthy 
> wrote:
The file itself is utf-8, or unicode, due to special characters in it, e.g. Lòpez.
So the problem is not with the file.

No, a UTF-8 stream is defined as such by a byte order marker at the start of 
the stream. You can have UTF-8 files composed entirely of ASCII characters.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363
--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363


Re: XML files served by Azure Websites

2017-02-28 Thread
Yes I did think BOM was on UTF-16. Either way, issue seems to be the header 
from the site. No idea where to set it. I'm suspecting that the lack of a value 
probably sends this as a default. Can't find ASCII mentioned anywhere in 
project files.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com  on behalf 
of Bill McCarthy 
Sent: Wednesday, March 1, 2017 3:05:00 PM
To: ozDotNet
Subject: RE: XML files served by Azure Websites

Thought it was the other way around and that the BOM was unnecessary for utf-8.
To me Greg's problem looks like the server is sending a response header saying
the content type is ASCII, then sending an xml file which is utf-8. Would have
to do an old-school byte dump to test, as I doubt any text editor would permit
the file to be saved as ASCII; it would be an invalid ASCII file.

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:55 PM
To: ozDotNet 
Subject: Re: XML files served by Azure Websites

On Wed, 1 Mar 2017 at 13:41 Bill McCarthy 
> wrote:
The file itself is utf-8, or unicode, due to special characters in it, e.g. Lòpez.
So the problem is not with the file.

No, a UTF-8 stream is defined as such by a byte order marker at the start of 
the stream. You can have UTF-8 files composed entirely of ASCII characters.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363
--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363


RE: XML files served by Azure Websites

2017-02-28 Thread
feedvalidator.org

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:19 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: XML files served by Azure Websites

What validator are you using?

On Wed, 1 Mar 2017 at 13:11 Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
I resaved the file, specifying the UTF-8 encoding, but still says the same. I 
think the file already was but perhaps not.

Here’s the link: http://www.sqldownunder.com/SQLDownUnderMP3Feed.xml


Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:05 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: XML files served by Azure Websites

On Wed, 1 Mar 2017 at 12:44 Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
Our podcast feed is served as an XML file from our Azure website. Feed 
validator returns this:

Your feed appears to be encoded as "UTF-8", but your server is reporting 
"US-ASCII"


Is the file actually UTF-8 or is it an ASCII file that has a UTF-8 attribute in 
the XML definition. There should be a byte order mark at the start of the file.


David.

--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363<tel:+61%20417%20189%20363>
--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363


RE: XML files served by Azure Websites

2017-02-28 Thread
I resaved the file, specifying the UTF-8 encoding, but still says the same. I 
think the file already was but perhaps not.

Here’s the link: http://www.sqldownunder.com/SQLDownUnderMP3Feed.xml


Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 1 March 2017 2:05 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: XML files served by Azure Websites

On Wed, 1 Mar 2017 at 12:44 Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
Our podcast feed is served as an XML file from our Azure website. Feed 
validator returns this:

Your feed appears to be encoded as "UTF-8", but your server is reporting 
"US-ASCII"


Is the file actually UTF-8 or is it an ASCII file that has a UTF-8 attribute in 
the XML definition. There should be a byte order mark at the start of the file.


David.

--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363


XML files served by Azure Websites

2017-02-28 Thread
Hi Brains Trust,

Our podcast feed is served as an XML file from our Azure website. Feed 
validator returns this:

Your feed appears to be encoded as "UTF-8", but your server is reporting 
"US-ASCII"

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me



RE: Used Azure SQL DB? Why or why not?

2017-01-31 Thread
Hi Glav,

One caught my eye there. Can't admit to liking the use of schemas for tenants. I
live in a world where "performance aside" isn't an aside. That has way too much
impact on query plans, caching, memory, etc. for my liking.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Paul Glavich
Sent: Saturday, 28 January 2017 3:58 PM
To: 'ozDotNet' 
Subject: RE: Used Azure SQL DB? Why or why not?

Hey Greg,

Use it all the time, and am working with a customer which is a greenfield 
project.

Things to note:

· Getting a good idea of performance relative to the size/number of
DTUs. Initially, it is a pretty rough guess at the best of times. Also, assuming
all the queries written against it are good (which often is not the case) makes
it harder to estimate properly. Over time and with adequate testing this becomes
less of an issue though.

· Retry with exponential back-off pattern. EF has a strategy to do this
BUT it doesn't support transactions. Want to use a transaction? Then disable the
retry/back-off policy and do your own. You can use something like Polly to do
this, but it is an extra dependency.
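The roll-your-own retry described in that point, sketched in Python rather than EF/Polly (the retryable exception type and delay values are placeholders): the key idea is that the whole transactional unit of work sits inside the retry loop, so each attempt re-runs the complete transaction.

```python
# Sketch: exponential back-off retry wrapping an entire unit of work.
import random
import time

def retry_with_backoff(work, attempts=5, base_delay=0.2,
                       retryable=(TimeoutError,)):
    for attempt in range(attempts):
        try:
            return work()  # e.g. open transaction, do work, commit
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the transient error
            # exponential back-off with a little jitter
            time.sleep(base_delay * (2 ** attempt)
                       + random.uniform(0, base_delay))
```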

· Syncing data between azure sql and an on premise sql. There are 
options but I think SQL Azure data sync is mostly it. If it doesn’t work well 
with that, well, make it up from there.

· Customer initially started using a central SQL Dev DB. Caused all 
sorts of pain. I created a set of migration scripts so that Db can be run 
locally, with migration scripts for SQL in Azure.

· Migrating thought process from multiple databases to a single Db with 
multiple schemas. Not that you can’t use multiple databases, but it is mostly 
easier (especially for migration scripts) to operate on one DB (performance 
aside).

Probably a few others, but that is a brain dump for now.
Also, I will be seeing you at ignite as I got asked to do a preso only recently.

See you there ☺


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Saturday, 28 January 2017 1:20 PM
To: ozDotNet >
Subject: Used Azure SQL DB? Why or why not?

To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me



RE: Used Azure SQL DB? Why or why not?

2017-01-30 Thread
Sounds like "we know nothing about" is a big part of the issue…

On their worst day, they do a better job of this than any company I’ve ever 
been to. (And I’ve been to plenty)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Tuesday, 31 January 2017 3:33 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Used Azure SQL DB? Why or why not?

Another argument we got from the DBAs was "we are responsible for the integrity 
of the data so we're not relying on some thing in the cloud we know nothing 
about."

On Tue, Jan 31, 2017 at 1:36 PM, DotNet Dude 
<adotnetd...@gmail.com<mailto:adotnetd...@gmail.com>> wrote:
They are definitely the same ones who are the gatekeepers for the decisions. 
I've seen that the most. Protecting their jobs and trying to maintain the 
"seniority by age" mindset in the company. I've seen some gun devs just leave 
because they couldn't be bothered dealing with the nonsense.

On Tue, Jan 31, 2017 at 1:00 PM, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
So you think it is just job protection?

Are the people protecting their jobs the same ones who are gatekeepers for the 
decisions?
Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
<ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>> on behalf 
of DotNet Dude <adotnetd...@gmail.com<mailto:adotnetd...@gmail.com>>
Sent: Tuesday, January 31, 2017 7:22:25 AM
To: ozDotNet

Subject: Re: Used Azure SQL DB? Why or why not?

The arguments are everything they can come up with even if they're not true. 
Eg. Pricing. It's just people who have been here the longest trying to keep 
things unchanged and keeping their jobs and super high salaries.

On Saturday, 28 January 2017, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
I can guess, but what type of politics? What are the arguments?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of DotNet Dude
Sent: Saturday, 28 January 2017 12:44 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: Used Azure SQL DB? Why or why not?

Everything Tony said plus politics :p

On Saturday, 28 January 2017, Tony Wright 
<tonyw...@gmail.com<mailto:tonyw...@gmail.com>> wrote:
Hi Greg,

The main thing I think stopping us has been on premises sql or dev edition sql. 
It just doesn't make sense to rely on the stability of the internet when 
developing, and an existing environment or dev edition is very little cost.

The other issue is that it ends up in an account belonging to a single person 
rather than being an organisational account.

The places where we've used Azure sql is when we've all wanted to all be able 
to access the database remotely with simplicity.

The main business driver for using sql Azure as opposed to on premises sql had 
been more about wanting sql to operate in a DMZ, nowhere near the 
organisation's confidential on premises data.

That said, we've just moved one application to using Windows Azure (started 
with table storage, moved to blob storage) simply because of the significant 
drop in cost of data.

Regards Tony

On 28 Jan 2017 1:19 PM, "Greg Low (罗格雷格博士)" 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>





RE: Used Azure SQL DB? Why or why not?

2017-01-30 Thread
…getting rid of it soon. I'll buy one of those quiet little media
boxes to hold my gigawatts of music and videos.

Greg K

On 28 January 2017 at 13:30, Tony Wright 
<tonyw...@gmail.com<mailto:tonyw...@gmail.com>> wrote:
Hi Greg,

The main thing I think stopping us has been on premises sql or dev edition sql. 
It just doesn't make sense to rely on the stability of the internet when 
developing, and an existing environment or dev edition is very little cost.

The other issue is that it ends up in an account belonging to a single person 
rather than being an organisational account.

The places where we've used Azure sql is when we've all wanted to all be able 
to access the database remotely with simplicity.

The main business driver for using sql Azure as opposed to on premises sql had 
been more about wanting sql to operate in a DMZ, nowhere near the 
organisation's confidential on premises data.

That said, we've just moved one application to using Windows Azure (started 
with table storage, moved to blob storage) simply because of the significant 
drop in cost of data.

Regards Tony

On 28 Jan 2017 1:19 PM, "Greg Low (罗格雷格博士)" 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775<tel:1300%20775%20775>) office | +61 
419201410<tel:+61%20419%20201%20410> mobile│ +61 3 8676 
4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>





Re: Used Azure SQL DB? Why or why not?

2017-01-30 Thread
So you think it is just job protection?

Are the people protecting their jobs the same ones who are gatekeepers for the 
decisions?

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>


From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of DotNet Dude <adotnetd...@gmail.com>
Sent: Tuesday, January 31, 2017 7:22:25 AM
To: ozDotNet
Subject: Re: Used Azure SQL DB? Why or why not?

The arguments are everything they can come up with even if they're not true. 
Eg. Pricing. It's just people who have been here the longest trying to keep 
things unchanged and keeping their jobs and super high salaries.

On Saturday, 28 January 2017, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
I can guess, but what type of politics? What are the arguments?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Saturday, 28 January 2017 12:44 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Used Azure SQL DB? Why or why not?

Everything Tony said plus politics :p

On Saturday, 28 January 2017, Tony Wright <tonyw...@gmail.com> wrote:
Hi Greg,

The main thing I think stopping us has been on premises sql or dev edition sql. 
It just doesn't make sense to rely on the stability of the internet when 
developing, and an existing environment or dev edition is very little cost.

The other issue is that it ends up in an account belonging to a single person 
rather than being an organisational account.

The places where we've used Azure sql is when we've all wanted to all be able 
to access the database remotely with simplicity.

The main business driver for using sql Azure as opposed to on premises sql had 
been more about wanting sql to operate in a DMZ, nowhere near the 
organisation's confidential on premises data.

That said, we've just moved one application to using Windows Azure (started 
with table storage, moved to blob storage) simply because of the significant 
drop in cost of data.

Regards Tony

On 28 Jan 2017 1:19 PM, "Greg Low (罗格雷格博士)" <g...@greglow.com> wrote:
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>



RE: Used Azure SQL DB? Why or why not?

2017-01-27 Thread
Hi Tony,

Why does it end up with someone’s account rather than an org account? Is that 
because of MSDN credits being used or something?

Is Internet stability a big issue where you work?

I’m also guessing they don’t have a geographically distributed developer team? 
(Or they all VPN/Citrix in or something?)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Saturday, 28 January 2017 12:31 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Used Azure SQL DB? Why or why not?

Hi Greg,

The main thing I think stopping us has been on premises sql or dev edition sql. 
It just doesn't make sense to rely on the stability of the internet when 
developing, and an existing environment or dev edition is very little cost.

The other issue is that it ends up in an account belonging to a single person 
rather than being an organisational account.

The places where we've used Azure sql is when we've all wanted to all be able 
to access the database remotely with simplicity.

The main business driver for using sql Azure as opposed to on premises sql had 
been more about wanting sql to operate in a DMZ, nowhere near the 
organisation's confidential on premises data.

That said, we've just moved one application to using Windows Azure (started 
with table storage, moved to blob storage) simply because of the significant 
drop in cost of data.

Regards Tony

On 28 Jan 2017 1:19 PM, "Greg Low (罗格雷格博士)" 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>



RE: Used Azure SQL DB? Why or why not?

2017-01-27 Thread
I can guess, but what type of politics? What are the arguments?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Saturday, 28 January 2017 12:44 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Used Azure SQL DB? Why or why not?

Everything Tony said plus politics :p

On Saturday, 28 January 2017, Tony Wright 
<tonyw...@gmail.com<mailto:tonyw...@gmail.com>> wrote:
Hi Greg,

The main thing I think stopping us has been on premises sql or dev edition sql. 
It just doesn't make sense to rely on the stability of the internet when 
developing, and an existing environment or dev edition is very little cost.

The other issue is that it ends up in an account belonging to a single person 
rather than being an organisational account.

The places where we've used Azure sql is when we've all wanted to all be able 
to access the database remotely with simplicity.

The main business driver for using sql Azure as opposed to on premises sql had 
been more about wanting sql to operate in a DMZ, nowhere near the 
organisation's confidential on premises data.

That said, we've just moved one application to using Windows Azure (started 
with table storage, moved to blob storage) simply because of the significant 
drop in cost of data.

Regards Tony

On 28 Jan 2017 1:19 PM, "Greg Low (罗格雷格博士)" 
<g...@greglow.com<javascript:_e(%7B%7D,'cvml','g...@greglow.com');>> wrote:
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:+61%20419%20201%20410> 
mobile│ +61 3 8676 4913<tel:+61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> 
|http://greglow.me<http://greglow.me/>



Used Azure SQL DB? Why or why not?

2017-01-27 Thread
To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions on if you've used it, and what you found/learned, and if you 
haven't used it, what stopped you ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me



RE: Win PE Boot USB

2017-01-14 Thread
Thanks David. I might use diskpart to format them, create the partition and 
make it active before doing the other bits.

It might not be setting it active. I might check that first.
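For anyone following along, the diskpart steps being described might look roughly like this. This is a sketch only: the disk number is an assumption, so check "list disk" first, because "clean" wipes the selected disk.

```
rem Hypothetical diskpart script; save as makeboot.txt, then run: diskpart /s makeboot.txt
rem Assumes the USB stick is disk 1 -- verify with "list disk" first, since "clean" wipes it.
select disk 1
clean
create partition primary
select partition 1
active
format fs=fat32 quick
assign
exit
```

Note that the "active" flag only matters for legacy BIOS/MBR booting; UEFI firmware ignores it and generally expects a FAT32 partition instead.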

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Saturday, 14 January 2017 7:56 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Win PE Boot USB

Cherish the one that works - that is what I do with them. Old ones fail - no 
idea why, but I always blame UEFI for everything.

I have more success with this manual process than any tool: 
https://www.thomas-krenn.com/en/wiki/Creating_Windows_UEFI_Boot-Stick_in_Windows

On Sat, 14 Jan 2017 at 16:50 Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
Hi Folks,

One for the local brains trust:

I create WinPE boot disks on three different types of USB sticks. One type 
boots afterwards (every time); the other two types don't. I'm struggling to work out 
what is different.


• I’ve tried them formatted as either FAT32 or NTFS

• I’ve tried removing an existing partition from them first, and 
letting the creation app (in this case Macrium Reflect) create them from scratch

Every time, one type of USB stick works and the other two types don't. I'm 
struggling to think of what else could be different.

Curiously, when I boot the 2nd type, the laptop returns “No operating system”. 
With the 3rd type, it says “Missing operating system”. I would have expected 
the same words coming back from the same laptop. That has me puzzled too. I 
would have thought those words would be coming back from the BIOS so I’m 
wondering what different path leads to that. (Perhaps it’s from the boot sector 
of the USB sticks).

Any thoughts welcome!

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363


Win PE Boot USB

2017-01-13 Thread
Hi Folks,

One for the local brains trust:

I create WinPE boot disks on three different types of USB sticks. One type 
boots afterwards (every time); the other two types don't. I'm struggling to work out 
what is different.


· I’ve tried them formatted as either FAT32 or NTFS

· I’ve tried removing an existing partition from them first, and 
letting the creation app (in this case Macrium Reflect) create them from scratch

Every time, one type of USB stick works and the other two types don't. I'm 
struggling to think of what else could be different.

Curiously, when I boot the 2nd type, the laptop returns “No operating system”. 
With the 3rd type, it says “Missing operating system”. I would have expected 
the same words coming back from the same laptop. That has me puzzled too. I 
would have thought those words would be coming back from the BIOS so I’m 
wondering what different path leads to that. (Perhaps it’s from the boot sector 
of the USB sticks).

Any thoughts welcome!

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com 
|http://greglow.me



RE: Entity Framework - the lay of the land

2017-01-03 Thread
“ORMs are still a real coding productivity boost,”

Are they though? I see them knock at best 10% off a dev project, and that dev 
work is at best probably 10% of the lifetime cost of the project.

So a 1% overall saving in project cost ends up determining and limiting so many 
aspects of the overall project over its life? Not sure that’s any sort of boost.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Tuesday, 3 January 2017 8:16 PM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

Hi Grant et al,

You're psychic, as I was going to post on this old topic later in the week, as 
I've rejigged my thinking a little in recent months.

I also used CodeSmith to make CRUD for a good few years and I was impressed by 
how easy it was. I used the netTiers templates, not handmade ones. What I liked 
about netTiers was that the CRUD was basically table-based and not 
over-engineered like many famous ORMs (including EF); it just threw a really 
handy bridge, at the lowest useful level, between classes and tables. Maybe even 
David C wouldn't turn his nose up at that?!

Both EF and netTiers support "deep loading" by effortlessly following joins, 
and that's about the only advanced feature of either of them that I ever used.

In recent months in both hobby code and some real apps I faced that choice of 
where to swing the pendulum of manipulating data ... towards the database or 
towards the app code. I have decided that all basic data manipulation like 
WHERE, ORDER, OVER, JOIN, SELECT, etc should be done in stored procs and not in 
the ORM or app code. You just can't beat the performance and clarity of doing 
this in the DB. After all, that's what it's built for! And EF is great for 
simply mapping the procs to methods and DTO classes.
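As a rough sketch of that split - the database doing the set-based work, with EF as nothing more than a mapping layer - something like the following EF6-style code fits. The proc name, parameter, and DTO here are hypothetical, not from the thread:

```csharp
using System.Collections.Generic;
using System.Data.Entity;        // EF6
using System.Data.SqlClient;
using System.Linq;

// Hypothetical DTO shaped to match the proc's result set.
public class CustomerOrderSummary
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
    public decimal TotalOrderValue { get; set; }
}

public class ShopContext : DbContext
{
    // All the WHERE/JOIN/ORDER work lives in dbo.GetTopCustomersByValue;
    // EF just materialises the returned rows into DTO instances.
    public List<CustomerOrderSummary> GetTopCustomersByValue(int count)
    {
        return Database
            .SqlQuery<CustomerOrderSummary>(
                "EXEC dbo.GetTopCustomersByValue @Count",
                new SqlParameter("@Count", count))
            .ToList();
    }
}
```

The app code never composes SQL; it just calls a method and gets typed DTOs back, which is the fence being described.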

I now put a fence up in my mind to put all basic data manipulation in the DB on 
one side and strictly business logic in the code on the other side. Sometimes 
you have to shred and knit DTOs, but that should be in app code as well.

And Grant's concern about dependency on specific ORMs is quite valid. We have 
one app that heavily used EF v4 and the self-tracking entities, which were 
deprecated, and now we're stuck and can't get to EF6 without industrial effort. 
Imagine trying to completely change your ORM brand.

So in summary I have decided for now that ORMs are still a real coding 
productivity boost, but only when used for basic CRUD and DTOs.

Greg K


Re: [OT] HP Spectre x360 thoughts

2016-12-14 Thread
My E7440 (Dell) has been brilliant but they did replace the screen because of a 
hinge issue - but functionality: wonderful

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com  on behalf 
of David Gardiner 
Sent: Thursday, December 15, 2016 10:23:04 AM
To: ozDotNet
Subject: Re: [OT] HP Spectre x360 thoughts

Just to add to Tony's experience, I've been using a Dell XPS 15 (9550) since 
March. I did end up getting the battery replaced because it turned out there 
was something weird inside it that was pushing up against the touchpad, making 
it hard to 'click'. After that it's been pretty good - quite reliable. I don't 
make much use of the Thunderbolt port though (I've seen a few firmware/driver 
updates come out for that, so it does sound like there have been some issues there).

This is the second Dell laptop I've had (the first was a 1645) and one other 
thing I've noticed is that they tend to be pretty good at working with various 
data projectors. On a number of occasions I've been at a venue where my laptop has 
worked fine while other brands have had problems with cropping or just refused to 
work at all.

David

On 15 December 2016 at 08:09, Tom P wrote:
Wow thanks for the comprehensive email Tony. During my research I actually did 
read about horror stories like yours where people ended up sending machines 
back several times. It's really disappointing when you're spending so much 
money. I know several people who just refuse to deal with Dell now after having 
many issues with them. I'll keep looking...

On Wednesday, 14 December 2016, Tony Wright  wrote:
Hi Tom,

I have been reviewing laptops lately for value for money and decided the 
battery life on the x360 sucked.

Most of the laptops in the $3000 range are dual core as well.

If you're after a 2-in-1 and dual core is fine, you could consider the Lenovo ThinkPad 
X1 Yoga or the Lenovo Yoga 910. The Yoga 910 is a consumer model and has a 7th-gen 
Intel chip but no pen capability. The ThinkPad X1 Yoga has a pen but a different port 
configuration.

Check ports on all laptops you consider. Thunderbolt ports are best if you can 
get them. USB-C is second best (you can run multiple external monitors plus an 
Ethernet cable via those ports), but you will also need adapters to fit.

The best value I found was a Dell XPS 15, but I have had 
major issues. They have now replaced my motherboard three times due to crashes, 
screen flickering and Thunderbolt port failures. Tomorrow they will replace my 
motherboard for the fourth time. Not good enough. If it fails this time, I'm 
getting a refund.

My advice is to look for discount codes as well. My son has a student account 
giving him access to discounts on HP (limited selection, up to 40%), Dell (15%) 
and Microsoft (15%). Lenovo had up to 20% recently but have removed that deal. 
Lenovo often have other deals. Apple offer 10% through a student discount. Auto 
clubs, like RACV, also have discounts.

If my laptop fails again and I have to buy another laptop, I think I might get 
a Lenovo P50. They're expensive and not as sexy, but I can get a Xeon chip 
or a high-end quad core, go up to 64 GB RAM, and add a second NVMe PCIe SSD if I 
like.

The other laptops I considered were the Surface Book (didn't like the lack of 
Thunderbolt); the Apple MacBook Pro, which you can install Windows natively on - it's 
got an awesome configuration but bad battery life, and that's reduced further 
by Windows; the ASUS ZenBook Pro 15, but I couldn't find a price for the right 
configuration I want (I only want 1920x1080 as I want more battery life); the 
HP Omen, which lacks extensibility; and the Dell Precision 7510, far too expensive in 
Australia.

Hope this helps!

Tony

On 14 Dec 2016 5:34 PM, "Tom P"  wrote:
Hi Folks,

I'm thinking of buying the HP Spectre x360 13 inch with high specs (16 GB RAM, 
512 GB SSD, i7), which ends up costing about $3100 with the warranty. Have any devs 
here had bad experiences with this machine or recommend a better alternative 
for the price?

Cheers
Tom



--
Thanks
Tom



--
Thanks
Tom




Re: [OT] HP Spectre x360 thoughts

2016-12-14 Thread
Yep love the look of the P50 and importantly a 4 channel nvme.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com  on behalf 
of Tony Wright 
Sent: Wednesday, December 14, 2016 10:16:41 PM
To: ozDotNet
Subject: Re: [OT] HP Spectre x360 thoughts

Hi Tom,

I have been reviewing laptops lately for value for money and decided the 
battery life on the x360 sucked.

Most of the laptops in the $3000 range are dual core as well.

If you're after a 2-in-1 and dual core is fine, you could consider the Lenovo ThinkPad 
X1 Yoga or the Lenovo Yoga 910. The Yoga 910 is a consumer model and has a 7th-gen 
Intel chip but no pen capability. The ThinkPad X1 Yoga has a pen but a different port 
configuration.

Check ports on all laptops you consider. Thunderbolt ports are best if you can 
get them. USB-C is second best (you can run multiple external monitors plus an 
Ethernet cable via those ports), but you will also need adapters to fit.

The best value I found was a Dell XPS 15, but I have had 
major issues. They have now replaced my motherboard three times due to crashes, 
screen flickering and Thunderbolt port failures. Tomorrow they will replace my 
motherboard for the fourth time. Not good enough. If it fails this time, I'm 
getting a refund.

My advice is to look for discount codes as well. My son has a student account 
giving him access to discounts on HP (limited selection, up to 40%), Dell (15%) 
and Microsoft (15%). Lenovo had up to 20% recently but have removed that deal. 
Lenovo often have other deals. Apple offer 10% through a student discount. Auto 
clubs, like RACV, also have discounts.

If my laptop fails again and I have to buy another laptop, I think I might get 
a Lenovo P50. They're expensive and not as sexy, but I can get a Xeon chip 
or a high-end quad core, go up to 64 GB RAM, and add a second NVMe PCIe SSD if I 
like.

The other laptops I considered were the Surface Book (didn't like the lack of 
Thunderbolt); the Apple MacBook Pro, which you can install Windows natively on - it's 
got an awesome configuration but bad battery life, and that's reduced further 
by Windows; the ASUS ZenBook Pro 15, but I couldn't find a price for the right 
configuration I want (I only want 1920x1080 as I want more battery life); the 
HP Omen, which lacks extensibility; and the Dell Precision 7510, far too expensive in 
Australia.

Hope this helps!

Tony

On 14 Dec 2016 5:34 PM, "Tom P" wrote:
Hi Folks,

I'm thinking of buying the HP Spectre x360 13 inch with high specs (16 GB RAM, 
512 GB SSD, i7), which ends up costing about $3100 with the warranty. Have any devs 
here had bad experiences with this machine or recommend a better alternative 
for the price?

Cheers
Tom



--
Thanks
Tom



Re: [OT] node.js and express

2016-11-27 Thread
Oh yes

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: David Connors
Sent: Monday, November 28, 2016 12:15 pm
Subject: Re: [OT] node.js and express
To: ozDotNet


On Mon, 28 Nov 2016 at 10:56 Scott Barnes wrote:
"...Everytime I see a developer use multi-threading I think to myself, thank 
you for keeping future consultancy billables alive" - Anonymous

Most developers who create messes don't know what a thread is.

Personally, I thank whoever invented ORMs before I go to bed at night.

Performance and Go Live Problems FOR EVERYBODY!

David

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363




RE: [OT] node.js and express

2016-11-24 Thread
Yep, LightSwitch is dead. It was Silverlight-based.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Friday, 25 November 2016 2:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

Arguably, a productive web-based RAD tool is exactly the sort of niche that 
Microsoft LightSwitch was trying to fill (although I'm pretty certain it's now 
dead). As I said earlier, we use OutSystems here, and I believe it's an area 
that Aurelia.IO and other vendors are growing into as well.

On 25 November 2016 at 11:00, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
But that's exactly the point Scott. Why have we gone so far backwards in 
productivity?
Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com


From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Scott Barnes <scott.bar...@gmail.com>
Sent: Friday, November 25, 2016 12:09:38 PM
To: ozDotNet

Subject: Re: [OT] node.js and express

"It depends" on what tool you're looking at. If all you're doing is staring at 
Visual Studio and wondering why the world is so hard to develop for, then that's 
not a realistic outcome: despite all the OSS rhetoric, Microsoft is still 
preoccupied with a Windows-as-first-class-citizen approach to roadmaps. They'll 
dip their toes in other platforms, but until the revenue models change, tooling -> 
Windows. The rest will just be an additive byproduct / bonus rounds outside that.

Products like Unity3D and Xamarin were the answer to that question, but they're not 
as "drag-n-drop, tab-dot-ship" as the WinForms of old... those days are well behind 
us now.






---
Regards,
Scott Barnes
http://www.riagenic.com

On Fri, Nov 25, 2016 at 9:54 AM, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
So it then comes back to tooling again.

Why can’t I build an app with the ease of a winform app and have it deployed in 
the current environments? Surely the app framework should fix the underlying 
mess and let me code to a uniform clean model.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Thursday, 24 November 2016 9:41 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

I guess the conclusion I would draw from that is not so much that the “web 
world is so much worse because we have to cater for all these clients” as “the 
web world is the only feasible answer to catering for all these clients – it’s 
simply not financially feasible to do it via thick clients”

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Wednesday, 23 November 2016 5:40 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

As I said in my first e-mail, (when Greg was wondering what the key drivers 
were for web-development), I said "accessibility". Thick clients are simply not 
transportable.
So the simple answer is, you don't.

On 23 November 2016 at 14:21, Ken Schaefer <k...@adopenstatic.com> wrote:


From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Wednesday, 23 November 2016 5:10 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

@Ken, your definition of Technical Debt isn't that different from Martin Fowler's.
Although I'd say (with some seriousness) that JavaScript is Technical Debt ;-)

I've found many of the things you mention far worse in the web-world (where you 
sometimes have to cater for everything from a mobile phone to a quadruple-monitor 
desktop, and everything in-between, all with different OSes, software, plug-ins, 
versions, and incompatibilities).

RE: [OT] node.js and express

2016-11-24 Thread
So it then comes back to tooling again.

Why can’t I build an app with the ease of a winform app and have it deployed in 
the current environments? Surely the app framework should fix the underlying 
mess and let me code to a uniform clean model.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Thursday, 24 November 2016 9:41 PM
To: ozDotNet 
Subject: RE: [OT] node.js and express

I guess the conclusion I would draw from that is not so much that the “web 
world is so much worse because we have to cater for all these clients” as “the 
web world is the only feasible answer to catering for all these clients – it’s 
simply not financially feasible to do it via thick clients”

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Wednesday, 23 November 2016 5:40 PM
To: ozDotNet
Subject: Re: [OT] node.js and express

As I said in my first e-mail, (when Greg was wondering what the key drivers 
were for web-development), I said "accessibility". Thick clients are simply not 
transportable.
So the simple answer is, you don't.

On 23 November 2016 at 14:21, Ken Schaefer wrote:


From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Wednesday, 23 November 2016 5:10 PM
To: ozDotNet
Subject: Re: [OT] node.js and express

@Ken, your definition of Technical Debt isn't that different from that of 
Martin Fowler's.
Although I'd say (with some seriousness) that JavaScript is Technical Debt ;-)

I've found many of the things you mention far worse in the web-world (where you 
sometimes have to cater for everything from a mobile phone to a quadruple-monitor 
desktop, and everything in-between, all with different OSes, software, plug-ins, 
versions, and incompatibilities).

I’m curious to know how you’d cater for this variety of consumers if you were 
to do thick-client development? Wouldn’t that be even more of a dog’s breakfast 
of OSes, development environments/languages, pre-requisites you’d need to ship 
etc?



RE: [OT] node.js and express

2016-11-22 Thread
I’m also seeing diabolical messes in the DevOps on the web apps too, though.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Tuesday, 22 November 2016 3:14 PM
To: ozDotNet 
Subject: RE: [OT] node.js and express

A couple of possible reasons:


- All the emphasis is on centrally delivered applications (aka web-based), so 
that’s where all the innovation and change is happening. It will take time for 
maturity and tooling to catch up.

- It’s harder to bypass the full technical cost of development when 
something’s centrally delivered. It’s easier to incur “technical debt” when you 
build a little thick-client app – the real cost of the app gets buried in IT 
operations.

Cheers
Ken

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 22 November 2016 2:33 PM
To: ozDotNet
Subject: RE: [OT] node.js and express

But that’s a centralized vs distributed argument. I understand that. But why 
exactly does a centralized development process have to be orders of magnitude 
slower than a distributed one? I just think the tooling has let us down -> big 
time.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me




RE: [OT] node.js and express

2016-11-21 Thread
Along with an endless fascination with “shiny new things” at every phase of the 
development process.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: Greg Low (罗格雷格博士)
Sent: Tuesday, 22 November 2016 2:33 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

But that’s a centralized vs distributed argument. I understand that. But why 
exactly does a centralized development process have to be orders of magnitude 
slower than a distributed one? I just think the tooling has let us down -> big 
time.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Tuesday, 22 November 2016 2:29 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express


From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Tuesday, 22 November 2016 1:53 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

But many of the same problems persist on the web, and the web has brought 
entirely new challenges. The web has all the same issues of DLL hell - 
different components need different versions of the same component. Just the 
other day I saw a project that required multiple versions of jQuery due to 
different component requirements, and the hacks required to ensure that the 
right version is used for the right components aren't pretty. And apps are no 
longer isolated within the company - you're not developing for a particular SOE 
anymore, but rather a polyglot of devices with different operating systems, 
software, features, sizes, resolutions and capabilities. Testing, if anything, is 
a longer process than ever before. Licensing for some software is on a per-user 
basis on the web, which brings its own challenges for anonymous systems. 
Deployment must still go through a full change management process (there are 
still a multitude of things that can go wrong), and when you consider the same 
thing could occur today using one-click deployment, sand-boxed applications, 
and Docker containers, the web doesn't have an ace up its sleeve there either.

In my experience, part of the reason that enterprises prefer stuffing all that 
complexity resolution into the development effort (or getting your vendor to 
take the hit in their development effort) is that it’s project cost. It makes 
it much clearer what the true cost of “application X” is to the business.

If the alternative is passing the cost to BAU/Ops (in terms of managing 
interoperability and keeping an end-user fleet of devices running), it becomes 
far more murky as to what is causing the complexity and how much it’s costing. 
Whoever is funding the project will cut whatever they can (whether it be 
security, sociability testing, or monitoring instrumentation), and make it 
Operations' problem.

As for “ClickOnce” etc., that’s just solving a small technological piece of the 
puzzle. How do I do deployment accounting/licensing etc. via ClickOnce?

My current org is ~40,000 users spread from Sydney to Woomera – do not 
underestimate the complexity of deploying or upgrading anything critical in 
that type of environment: when we used to run a distributed Active Directory 
environment (so that local branches could keep running if the WAN was down), 
simply upgrading AD schema was a 9+ month project, where we ended up auditing 
every DC (around 150 branch ones at the time) to verify that their out-of-band 
management cards were working (about 20 needed replacing, or were not cabled), 
and had to roster tens of support techs to be ready to drive/fly out to a site, 
just in case we had to pull the upgrade process due to something going 
catastrophically wrong. The change had to be done over a long weekend, because 
that was the only time that gave us enough lead time to do an authoritative 
restore. Now, upgrading AD isn’t particularly important to a bank – and if I 
was IT leadership I’d be asking: “why can’t I deploy a core systems upgrade, or 
upgrade online banking during this key window? Why am I upgrading ‘AD’, 
whatever the f*ck that is”

Now, it’s all sitting in our data centres, and we could probably do a schema 
change overnight if we had to. There are lots of Ops benefits to centralising 
all your core logic and systems.

Cheers
Ken


RE: [OT] node.js and express

2016-11-21 Thread
But that’s a centralized vs distributed argument. I understand that. But why 
exactly does a centralized development process have to be orders of magnitude 
slower than a distributed one? I just think the tooling has let us down -> big 
time.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419 201 410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Tuesday, 22 November 2016 2:29 PM
To: ozDotNet 
Subject: RE: [OT] node.js and express


From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Tuesday, 22 November 2016 1:53 PM
To: ozDotNet
Subject: Re: [OT] node.js and express

But many of the same problems persist on the web, and the web has brought 
entirely new challenges. The web has all the same issues of DLL hell - 
different components need different versions of the same component. Just the 
other day I saw a project that required multiple versions of jQuery due to 
different component requirements, and the hacks required to ensure that the 
right version is used for the right components aren't pretty. And apps are no 
longer isolated within the company - you're not developing for a particular SOE 
anymore, but rather a polyglot of devices with different operating systems, 
software, features, sizes, resolutions and capabilities. Testing, if anything, is 
a longer process than ever before. Licensing for some software is on a per-user 
basis on the web, which brings its own challenges for anonymous systems. 
Deployment must still go through a full change management process (there are 
still a multitude of things that can go wrong), and when you consider the same 
thing could occur today using one-click deployment, sand-boxed applications, 
and Docker containers, the web doesn't have an ace up its sleeve there either.

In my experience, part of the reason that enterprises prefer stuffing all that 
complexity resolution into the development effort (or getting your vendor to 
take the hit in their development effort) is that it’s project cost. It makes 
it much clearer what the true cost of “application X” is to the business.

If the alternative is passing the cost to BAU/Ops (in terms of managing 
interoperability and keeping an end-user fleet of devices running), it becomes 
far more murky as to what is causing the complexity and how much it’s costing. 
Whoever is funding the project will cut whatever they can (whether it be 
security, sociability testing, or monitoring instrumentation), and make it 
Operations' problem.

As for “ClickOnce” etc., that’s just solving a small technological piece of the 
puzzle. How do I do deployment accounting/licensing etc. via ClickOnce?

My current org is ~40,000 users spread from Sydney to Woomera – do not 
underestimate the complexity of deploying or upgrading anything critical in 
that type of environment: when we used to run a distributed Active Directory 
environment (so that local branches could keep running if the WAN was down), 
simply upgrading AD schema was a 9+ month project, where we ended up auditing 
every DC (around 150 branch ones at the time) to verify that their out-of-band 
management cards were working (about 20 needed replacing, or were not cabled), 
and had to roster tens of support techs to be ready to drive/fly out to a site, 
just in case we had to pull the upgrade process due to something going 
catastrophically wrong. The change had to be done over a long weekend, because 
that was the only time that gave us enough lead time to do an authoritative 
restore. Now, upgrading AD isn’t particularly important to a bank – and if I 
was IT leadership I’d be asking: “why can’t I deploy a core systems upgrade, or 
upgrade online banking during this key window? Why am I upgrading ‘AD’, 
whatever the f*ck that is”

Now, it’s all sitting in our data centres, and we could probably do a schema 
change overnight if we had to. There are lots of Ops benefits to centralising 
all your core logic and systems.

Cheers
Ken


RE: [OT] node.js and express

2016-11-21 Thread
Hi Tom,

Not suggesting that one is a replacement for the other. Just commenting on the 
productivity loss that has happened over the years. We seem to have replaced 
one mess with a bigger one.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tom Rutter
Sent: Tuesday, 22 November 2016 2:17 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

Dr Greg, the situation you describe below is quite rare from what I've seen. If 
a winforms app would suffice, then why was it important for the web app to work 
on multiple browsers? Multi-browser support is usually only really needed for 
Internet-facing apps. Dev teams usually just tell users which browser internal 
apps support and avoid the multi-browser issues.

On Tue, Nov 22, 2016 at 8:33 AM, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
I’m simply amazed at what we’ve done to ourselves as an industry.

I was on a project a while back. With 12 devs and 7 months’ work, the core 
business web app was created. The guys worked hard. At the end, they were still 
struggling to get it to look right on different browsers.

But in the end, I looked at the outcome and knew in my heart that I could have 
created it as a winform app by myself in around a week.

This is progress?

We started building web apps because the IT people were fed up with trying to 
deploy Windows apps. It wasn’t because users were crying out for a lousy visual 
experience, and apps that throw away their work if they stop using them for the 
session timeout period.

I think we “fixed” the wrong problem.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Monday, 21 November 2016 6:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express


Goodness, you are not alone.

I'm more surprised that you are surprised, that's all.



Some links to confirm you are not alone (and some funny, cause it's true, 
reading)

https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f#.cdvrepjwi



https://medium.com/@wob/the-sad-state-of-web-development-1603a861d29f#.kqtp9oyq6



There was a hilarious one written by a Java developer where she all but 
dissolved in tears and screaming... but I can't find it right now. Funny 
because it was pretty spot on, not because a poor soul was suffering.



If this shit was easy, everyone would be doing it. There's job security in the 
pain, somewhere.



cheers

Stephen

p.s. All opinions and beliefs are my own. I'm not sure how they came to be, for 
that I can only blame those I've hung around, in real life and online.


From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Greg Keogh <gfke...@gmail.com>
Sent: Monday, 21 November 2016 2:48:54 PM
To: ozDotNet
Subject: Re: [OT] node.js and express


You're not alone Greg. It's like going back to spaghetti but everyone around me 
doesn't agree.

Thank heavens someone is sympathetic. I thought I was crazy, but I'm glad to 
know you are too! -- Greg



RE: [OT] node.js and express

2016-11-21 Thread
What do you see as the key drivers Ken?

I can guess as I spend my life in these environments but I’m left wondering if 
we could have solved them a much better way.

We simply haven’t achieved productivity. And I’ll bet if someone is starting to 
build something new today, they can’t even work out what to use. How did we get 
to this?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Tuesday, 22 November 2016 12:15 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

Typical Devs – all they talk about is how much faster/quicker they can write an 
app in one tech vs. another. As if that’s the only thing that matters. ☺☺ 
(note, smiley faces!)

Development time/cost/effort is generally a small fraction of the cost of 
supporting an app, let alone the cost of supporting a large environment.

Maybe thick-client deployment works well in small(er) environments, but it 
doesn’t scale in larger ones. As David alluded to, there were many drivers 
behind the move towards web-based applications.

Cheers
Ken

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Tuesday, 22 November 2016 9:15 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

Totally agree Greg. About 80% of what we are currently building could be done 
in 1/10 of the time using winforms or mvc.

Some of our clients are even TELLING us how to build it using whatever 
technology they've recently heard of. One customer recently asked us to use 
Electron. Did they need cross platform? No. Why force javascript down my team's 
throat when it can be avoided altogether and we can have it done in a week with 
wpf or winforms?!

Many years ago we just did a winforms app and deployed via clickonce. Worked 
well and no complaints in the Intranet environments. I've yet to see a case 
where not using winforms (or wpf) or webforms (or mvc) is worth it in Intranet 
situations.

Internet facing apps is a whole different thing obviously.

On Tuesday, 22 November 2016, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
I’m simply amazed at what we’ve done to ourselves as an industry.

I was on a project a while back. With 12 devs and 7 months’ work, the core 
business web app was created. The guys worked hard. At the end, they were still 
struggling to get it to look right on different browsers.

But in the end, I looked at the outcome and knew in my heart that I could have 
created it as a winform app by myself in around a week.

This is progress?

We started building web apps because the IT people were fed up with trying to 
deploy Windows apps. It wasn’t because users were crying out for a lousy visual 
experience, and apps that throw away their work if they stop using them for the 
session timeout period.

I think we “fixed” the wrong problem.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Monday, 21 November 2016 6:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express


Goodness, you are not alone.

I'm more surprised that you are surprised, that's all.



Some links to confirm you are not alone (and some funny, cause it's true, 
reading)

https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f#.cdvrepjwi



https://medium.com/@wob/the-sad-state-of-web-development-1603a861d29f#.kqtp9oyq6



There was a hilarious one written by a Java developer where she all but 
dissolved in tears and screaming... but I can't find it right now. Funny 
because it was pretty spot on, not because a poor soul was suffering.



If this shit was easy, everyone would be doing it. There's job security in the 
pain, somewhere.



cheers

Stephen

p.s. All opinions and beliefs are my own. I'm not sure how they came to be, for 
that I can only blame those I've hung around, in real life and online.


From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf 
of Greg Keogh <gfke...@gmail.com>
Sent: Monday, 21 N



RE: [OT] Ad tracking and security

2016-10-04 Thread
There was also something about techniques that Google was now deploying to 
circumvent most of the ad blockers as well. I think that involved making their 
advertiser sites look like they aren’t who they actually are, at least at first.

I think this is just going to get murkier and murkier. Like it or not, search 
engine companies are funded by ads. It’s not in Google’s (or similar company’s) 
interests to give you an ad free experience unless you are paying them some 
other way.

Old saying is that if the service is free, you are the product.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: Greg Low (罗格雷格博士)
Sent: Wednesday, 5 October 2016 11:47 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] Ad tracking and security

I thought I’d read that even common adblocker programs are now deciding which 
ads to let through (ie: who has paid them to be “relevant”). I think “Adblock 
Plus” was the topic of the article.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Wallace Turner
Sent: Wednesday, 5 October 2016 11:38 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] Ad tracking and security

I had a quick look into this because straight away I thought the scenario Greg 
described wouldn't be possible with cookies alone - that is, if you go to 
site-A and it creates a cookie, how would that cookie then be sent to site-B to 
determine it's the same person?

As Ken says, it appears that they are able to generate a unique fingerprint:
>>We look for browser type, screen size, active plugin data, active installed 
>>software, font usage, font size, time zones, IP, and countless other unique 
>>ways to correlate machines into unique ID’s  [1]

This same fingerprint is generated on site-A and site-B, and they then serve 
ads. You can of course boycott the sites that participate in this ad network (I 
would like to know the scope of the various networks).
I would assume (hope) that Adblock or similar prevents the JavaScript calls to 
the ad networks...


[1]: https://meteora.co/user-tracking-without-cookies/
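To make the cross-site correlation concrete, here is a toy sketch of the idea from the linked article: hash a bundle of browser signals into a stable ID. This is plain TypeScript with a deterministic FNV-1a hash; the signal names are illustrative only, not any ad network's actual algorithm.

```typescript
// A handful of signals an embedded ad script could read on any page.
interface Signals {
  userAgent: string;
  screen: string;      // e.g. "1920x1080"
  timezone: string;
  fonts: string[];
  plugins: string[];
}

// FNV-1a 32-bit hash: tiny and deterministic, fine for a demo.
function fnv1a(input: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // keep it in 32-bit space
  }
  return h.toString(16).padStart(8, "0");
}

function fingerprint(s: Signals): string {
  // Canonicalise the signals so the same machine always hashes the same,
  // even if the lists are enumerated in a different order.
  const canonical = [
    s.userAgent,
    s.screen,
    s.timezone,
    [...s.fonts].sort().join(","),
    [...s.plugins].sort().join(","),
  ].join("|");
  return fnv1a(canonical);
}
```

The same function run by site-A's and site-B's ad scripts yields the same ID for the same machine with no cookie involved, which is why clearing cookies and the cache doesn't make the ads go away.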

On Wed, Oct 5, 2016 at 7:57 AM, Ken Schaefer <k...@adopenstatic.com> wrote:
Lots of ways you can get tracked, from IP address to cookies, to running 
scripts in your browser to get a “fingerprint”
Lots of ways to try to limit this.

Google “how advertisers track you” (or maybe using www.DuckDuckGo.com might be 
more apropos)

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 5 October 2016 10:40 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: [OT] Ad tracking and security

Folks, this would normally be a Friday topic, but can someone explain how this 
is possible? ...

Last week my wife purchased some clothes online from 'Tread Store'. This 
morning I was at her PC searching in IE for some technical answers and I 
followed a link to Experts Exchange. In the discussion there I see a large 
flashing banner ad for Tread Store. I deleted a handful of suspicious cookies, 
cleared the cache and went back to the page and the ads are still there.

How the friggin' hell are they doing this? Is it simply by our IP address? If 
so, then there's not much I can do to stop this tracking without using a VPN or 
Tor browsing. This data collection creep is a serious worry. We order clothes, 
food, music, books and PC consumables online, so I presume it's all recorded. I 
also presume that as subscribers to The Age newspaper they are tracking every 
click we make. YouTube also records every video you watch. From this 
information you can produce a pretty good profile of someone you've never met. 
Some of us also voted online ... worried!?

Greg K





RE: Entity Framework - the lay of the land

2016-10-03 Thread
There is a certain sweet irony in creating a SQL object to query, to get around 
a limitation of querying the actual SQL object via the framework, no?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Corneliu I. Tusnea
Sent: Tuesday, 4 October 2016 12:36 PM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

Stephen,

My 2 cents without seeing the query.
1. Try to make a view that groups your main table with the detail table to 
calculate that extra status field.
I'd expect that to be quick and easy to do.
2. Change your EF to not query the table + 100 queries for the status but query 
the view.
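The shape of the N+1 problem Stephen describes below (and why a single grouped query like the view Corneliu suggests helps) can be sketched without EF at all. In this toy model the "database" is a pair of arrays, queryCount stands in for what SQL Profiler would show, and every name is invented for illustration.

```typescript
type Row = { id: number; year: number };
type Status = { rowId: number; version: number; state: string };

const rows: Row[] = [{ id: 1, year: 2016 }, { id: 2, year: 2016 }];
const statuses: Status[] = [
  { rowId: 1, version: 1, state: "Draft" },
  { rowId: 1, version: 2, state: "Final" },
  { rowId: 2, version: 1, state: "Draft" },
];

let queryCount = 0;

function queryRows(year: number): Row[] {
  queryCount++; // one round trip for the list
  return rows.filter(r => r.year === year);
}

// One round trip per row: the shape lazy per-item loading produces.
function queryLatestStatus(rowId: number): Status {
  queryCount++;
  return statuses
    .filter(s => s.rowId === rowId)
    .reduce((a, b) => (b.version > a.version ? b : a));
}

// One round trip total: the shape a view or grouped join produces.
function queryAllLatestStatuses(rowIds: number[]): Map<number, Status> {
  queryCount++;
  const latest = new Map<number, Status>();
  for (const s of statuses) {
    if (!rowIds.includes(s.rowId)) continue;
    const cur = latest.get(s.rowId);
    if (!cur || s.version > cur.version) latest.set(s.rowId, s);
  }
  return latest;
}

// N+1 shape: 1 list query + 1 status query per row.
queryCount = 0;
const list = queryRows(2016);
const latestPerRow = list.map(r => queryLatestStatus(r.id));
const nPlusOneQueries = queryCount; // 3 for 2 rows; 101 for a 100-row page

// Batched shape: 2 queries no matter how long the list is.
queryCount = 0;
const batched = queryAllLatestStatuses(queryRows(2016).map(r => r.id));
const batchedQueries = queryCount; // always 2
```

Folding the latest-status lookup into one query turns roughly 101 round trips into 2, which is usually a far bigger win than tuning any individual query.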




On Tue, Oct 4, 2016 at 12:29 PM, Stephen Price wrote:

Hey all,



Am looking at optimising an EF query right now, so thought it would be ok to 
hijack this thread. Even if it leads to bagging of EF, I'm ok with that.



So I have a single table being queried, and I grabbed the query being run via 
SQL Server profiler.

4.5million records in the table. Have an Id field, a year field and an EventId 
field. The rest of the fields are data, so not searching those.

The query being produced shows up as an sp_executesql call and does a WHERE 
against the year field.

The actual query itself takes 1699ms, but the screen takes longer to return the 
result as it then loads the detail of each item so it can show the current 
status of each row. (ie the highest version status is the current, in a related 
status table).

So each query is fast but by the time it loads 100 of them, its made 100 little 
calls which all add up to a long delay to the user.



Options I'm thinking here (looking for validation of my thinking, or new ideas 
outside my database knowledge)

1. Reduce the number of items. Say 20 instead of 100.

2. Get the status asynchronously. Would need to work out how to do that client 
side but seems viable. The initial list would load in 2 seconds, then statuses 
at the top would load almost right away. Items out of sight (scroll to view 
them) would load later.

3. Single query. Server side query is doing a take(100) to reduce the number of 
results if the search is too broad... which means its possibly prematurely 
resolving the linq query and sending the status lookups individually rather 
than single query.

4. Something else. Get rid of EF and hand-write the SQL. Look for a new job 
because I didn't deliver on time.



Feedback, criticism, laughing and pointing all welcomed.

cheers

Stephen


From: ozdotnet-boun...@ozdotnet.com on behalf of Kirsten Greed
Sent: Saturday, 1 October 2016 5:26:33 PM

To: 'ozDotNet'
Subject: RE: Entity Framework - the lay of the land

That makes sense

It would be good to have some guidelines about where the cut-over point is.

Also, whether solutions like NServiceBus could mitigate the use of EF?



From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Saturday, 1 October 2016 12:40 PM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land
Agreed but not websites with thousands of concurrent users. The problem is that 
people don’t realise that the same logic doesn’t apply in both areas.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ 
+61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Kirsten Greed
Sent: Saturday, 1 October 2016 6:42 AM
To: 'ozDotNet'
Subject: RE: Entity Framework - the lay of the land

Caveat: this is for winforms line of business applications.



From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Kirsten Greed
Sent: Saturday, 1 October 2016 6:35 AM
To: 'ozDotNet'
Subject: Entity Framework - the lay of the land
My 2c

Horses for courses

I am using  EF Code first and loving it.

Most of the posts on this thread are about building the thing right.

Yet I am finding that EF Code first helps me a lot with building the right 
thing.

I find 

RE: Entity Framework - the lay of the land

2016-09-30 Thread
Agreed but not websites with thousands of concurrent users. The problem is that 
people don’t realise that the same logic doesn’t apply in both areas.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Kirsten Greed
Sent: Saturday, 1 October 2016 6:42 AM
To: 'ozDotNet' 
Subject: RE: Entity Framework - the lay of the land

Caveat: this is for winforms line of business applications.



From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Kirsten Greed
Sent: Saturday, 1 October 2016 6:35 AM
To: 'ozDotNet'
Subject: Entity Framework - the lay of the land
My 2c

Horses for courses

I am using  EF Code first and loving it.

Most of the posts on this thread are about building the thing right.

Yet I am finding that EF Code first helps me a lot with building the right 
thing.

I find changing the database design is much easier now that I use EF 
Migrations; it helps me stay in a "play" headspace, lowering my fear of 
changing the database structure.

There are places where I choose to break into transact-sql, but most of my CRUD 
is done via DevExpress XAF with EF Code first.

My 2c :-)
Kirsten















RE: Entity Framework - the lay of the land

2016-09-20 Thread
But if you know the ID of something and you want to update it, why do a round 
trip to read it first, then another round trip to update it, when you could 
have just updated it in the first place?
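The round-trip arithmetic here can be sketched with a toy data layer. All names are invented; a Map stands in for the database and roundTrips counts server calls - no EF involved.

```typescript
type Customer = { id: number; name: string };

const db = new Map<number, Customer>([[7, { id: 7, name: "Old Name" }]]);
let roundTrips = 0; // each function call below = one trip to the server

function selectById(id: number): Customer | undefined {
  roundTrips++;
  const row = db.get(id);
  return row ? { ...row } : undefined; // hand back a detached copy
}

function save(row: Customer): void {
  roundTrips++;
  db.set(row.id, { ...row });
}

// The shape being asked for: a single UPDATE ... WHERE Id = @id.
function updateNameDirect(id: number, name: string): void {
  roundTrips++;
  const row = db.get(id);
  if (row) row.name = name;
}

// Select-then-update: two trips for one logical change.
roundTrips = 0;
const c = selectById(7)!;
c.name = "New Name";
save(c);
const readModifyWriteTrips = roundTrips; // 2

// Direct update by known ID: one trip.
roundTrips = 0;
updateNameDirect(7, "Newer Name");
const directTrips = roundTrips;          // 1
```

Per update the difference is only one extra trip, but across a chatty screen or a batch job the select-first pattern doubles the traffic for no benefit when the key is already known.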

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Rhys Jones
Sent: Tuesday, 20 September 2016 8:03 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land


That's still the best way to update something

Get the object first, then update that reference, instead of trying to attach a 
new object with the same Id.

There is a performance hit, but it's an update - it doesn't need to be quick. 
If your requirement is speed when updating, then you shouldn't be using EF.

Davy


Si hoc legere scis nimium eruditionis habes.


On Tue, Sep 20, 2016 at 11:53 AM, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
Have they fixed the update situation yet? I remember that you had to select 
something before you could update it. (At least previously)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Rhys Jones
Sent: Tuesday, 20 September 2016 7:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land


I've been working with EF now for a few years,  here's a list of what went 
wrong / what went right.

Large public website

Good:
   No complex queries in EF; anything more than a couple of tables and a 
stored procedure is called.
   All objects from EF were transformed into new objects for use in the website.
Bad:
   The context was shared between processes and thus began to grow after an 
hour or two, causing a slowdown in EF. Regular flushing solved this.
   Updates into the database set the FK property but did not attach the object; 
this resulted in data being correct for a moment, but then overwritten with the 
original values when SaveChanges was called.


Large multinational bank - bulk processing
Good:
   Most processing was done without EF.
   The website used EF to query the same data.
Bad:
   The framework exposed IEnumerable at each interface, thus 
service.GetClients().Count() resulted in the entire table being returned. 
Changing the interface to IQueryable allowed the DB to do a COUNT(*).

Large multinational, low-use public website
Good:
   The EF context is queried and disposed of as soon as possible, leaving the 
website responsive.
Bad:
   Bad design of the database has resulted in needless queries bringing back 
data that is not used. All EF-generated queries are complicated.
   A mixture of stored procedures and EF context is used within a process, 
resulting in incorrect values.
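The IEnumerable-vs-IQueryable trap from the bank example above is easy to demonstrate in miniature. This is a hedged sketch in plain TypeScript, not EF: the "server" is an in-memory array, and rowsTransferred stands in for rows crossing the wire.

```typescript
const table = Array.from({ length: 1000 }, (_, i) => ({ id: i }));
let rowsTransferred = 0;

// IEnumerable-style service: materialises every row before the caller
// can count them - the whole table crosses the "wire".
function getClientsMaterialized(): { id: number }[] {
  rowsTransferred += table.length;
  return [...table];
}

// IQueryable-style service: hands back a deferred handle, so count()
// can run server-side as the equivalent of SELECT COUNT(*).
function getClientsDeferred(): { count(): number } {
  return { count: () => table.length };
}

rowsTransferred = 0;
const eagerCount = getClientsMaterialized().length; // right answer, 1000 rows pulled
const eagerTraffic = rowsTransferred;               // 1000

rowsTransferred = 0;
const lazyCount = getClientsDeferred().count();     // same answer, 0 rows pulled
const lazyTraffic = rowsTransferred;                // 0
```

Both shapes return the correct count; the difference only shows up in the traffic, which is why this kind of bug tends to survive testing and surface in production.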


I quite like EF, it's efficient to write queries in if you know what is being 
generated at the database level. I always output the SQL query to the debug 
window so I know what is being passed to the DB.
But if the query is not self-contained and requires a lot of tables, then a 
specific stored procedure should be used.  However, do not update with a stored 
procedure if you are using Entity to read back the values. Do POCO updates and 
read the linked objects and attach them correctly.

Davy.



Si hoc legere scis nimium eruditionis habes.


On Tue, Sep 20, 2016 at 10:03 AM, David Connors <da...@connors.com> wrote:
On Tue, 20 Sep 2016 at 13:59 Greg Low (罗格雷格博士) <g...@greglow.com> wrote:
I often get coy when I hear comparisons with Stack Overflow, Twitter, Facebook, 
Blog Engines, etc. though.
Most of those platforms are happy to just throw away transactions when the 
going gets heavy.
Also, most of their workloads are read-only and so highly cacheable at every 
layer of whatever architecture you choose.

Once you throw consistency and transaction isolation under the bus shit gets 
pretty easy pretty quick.

David.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363




RE: Entity Framework - the lay of the land

2016-09-20 Thread
Have they fixed the update situation yet? I remember that you had to select 
something before you could update it. (At least previously)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Rhys Jones
Sent: Tuesday, 20 September 2016 7:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land


I've been working with EF now for a few years,  here's a list of what went 
wrong / what went right.

Large public Website

Good:
No complex queries in EF, anything more than a couple of tables and a 
stored procedure is called.
All objects from EF were transformed into new objects for use in the website
Bad:
   The context was shared between processes and thus began to grow after an 
hour or two, causing a slowdown of EF. Regular flushing solved this.
  Updates into the database set the FK property but did not attach the object; 
this resulted in data being correct for a moment, but then overwritten with the 
original values when SaveChanges was called.
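As a sketch of that pitfall (EF6-style pseudocode; the ShopContext/Order/Customer model and the variable names are hypothetical, not from the original system, and this won't run without an EF model and database):

```csharp
using (var ctx = new ShopContext())
{
    var order = ctx.Orders.First(o => o.Id == orderId);

    // Pitfall: only the FK is changed, while order.Customer still references
    // the originally loaded entity. Depending on how the graph is tracked,
    // relationship fixup during SaveChanges can write the old value back.
    order.CustomerId = newCustomerId;

    // Safer: make the FK and the navigation property agree before saving,
    // by loading (or attaching) the new principal as well.
    order.Customer = ctx.Customers.Find(newCustomerId);

    ctx.SaveChanges();
}
```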


Large Multinational Bank - Bulk Processing
   Good:
   Most processing was done without EF,
  The website used EF to query the same data.
   Bad:
   The framework exposed IEnumerable on each interface, so 
service.GetClients().Count() resulted in the entire table being returned. 
Changing the interface to IQueryable allowed the DB to do a COUNT(*) instead.
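The effect is easy to reproduce without a database. In the sketch below (hypothetical CountingTable/Client names, invented for illustration), an in-memory "table" records how many rows Count() pulls through it. Because the static type is IEnumerable&lt;Client&gt;, the call binds to LINQ to Objects and enumerates everything; that is the behaviour that switching the interface to IQueryable (so the provider can translate the count to SQL) avoids:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Hypothetical names; the "table" is an in-memory sequence instrumented to
// show how many rows a single Count() call materialises.
class Client { public int Id; }

class CountingTable : IEnumerable<Client>
{
    public int RowsMaterialised;                    // rows pulled from the "database"

    public IEnumerator<Client> GetEnumerator()
    {
        for (int i = 1; i <= 1000; i++)
        {
            RowsMaterialised++;                     // every row crosses the wire
            yield return new Client { Id = i };
        }
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

static class Demo
{
    // Counting via IEnumerable<T> is LINQ to Objects: full enumeration.
    // Via IQueryable<T>, an ORM can translate it to SELECT COUNT(*) instead.
    public static (int Count, int RowsMaterialised) CountClients()
    {
        var table = new CountingTable();
        int count = table.Count();                  // binds to Enumerable.Count
        return (count, table.RowsMaterialised);
    }

    static void Main()
    {
        var (count, rows) = CountClients();
        Console.WriteLine($"{count} rows counted, {rows} rows materialised");
    }
}
```

With an IQueryable-shaped interface, the same Count() call reaches the query provider instead and never needs to materialise the rows.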

Large Multinational,  low use public website.
   Good:
  EF context is queried and disposed of as soon as possible, leaving the 
website responsive
   Bad:
  Bad design of the database has resulted in needless queries bringing back 
data that is not used. All EF-generated queries are complicated.
 A mixture of stored procedures and EF context is used within a process, 
resulting in incorrect values.


I quite like EF; it's efficient to write queries in if you know what is being 
generated at the database level. I always output the SQL query to the debug 
window so I know what is being passed to the DB.
But if the query is not self-contained and requires a lot of tables, then a 
specific stored procedure should be used. However, do not update with a stored 
procedure if you are using EF to read back the values. Do POCO updates, and 
read the linked objects and attach them correctly.

Davy.



Si hoc legere scis nimium eruditionis habes. (If you can read this, you have 
too much education.)


On Tue, Sep 20, 2016 at 10:03 AM, David Connors 
<da...@connors.com<mailto:da...@connors.com>> wrote:
On Tue, 20 Sep 2016 at 13:59 Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
I often get coy when I hear comparisons with Stack Overflow, Twitter, Facebook, 
Blog Engines, etc. though.
Most of those platforms are happy to just throw away transactions when the 
going gets heavy.
Also, most of their workloads are read-only and so highly cacheable at every 
layer of whatever architecture you choose.

Once you throw consistency and transaction isolation under the bus shit gets 
pretty easy pretty quick.

David.

--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363



Re: Expired MSDN Subscription - Transfer of VS to a new machine

2016-09-20 Thread
VS Community Edition not enough now? I see many individuals using it.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com




On Tue, Sep 20, 2016 at 5:11 PM +1000, "Chris F" 
> wrote:

Thinking outside the box.

Can you put in a complaint under Australian Consumer Law (ACL).

https://www.accc.gov.au/consumers/complaints-problems/make-a-consumer-complaint

Basically they are failing to honour your original agreement.

So ask for a refund/replacement under ACL.

Cheers,

Chris

On 15 August 2016 at 15:43, David Burstin 
> wrote:
You are correct. I can log in to my MSDN account even though my subscription 
has expired. But every page says "Your subscription has expired" and I can't 
access the product keys that I previously used nor download anything.
Seems like you are in a bind for as long as your dispute with Microsoft remains 
unresolved. I wish I could help you more.

Cheers
Dave

On 15 August 2016 at 15:08, Glen Harvy 
> wrote:
Hi,

I can login to my MSDN account even though the subscription has expired. 
Notwithstanding, it clearly states that a "Product Key" is not required nor is 
one available!

On 15/08/2016 1:10 PM, David Burstin wrote:
>> From memory, if you login to the MSDN subscriber site, you can still get 
>> access to your old keys

My experience has been that once my subscription expired, I could no longer log 
in to the MSDN site to get my previous product keys.

On 15 August 2016 at 13:34, Ken Schaefer 
> wrote:
VS2013 had a static product key – it just varied depending on the edition you 
wanted to install.

From memory, if you login to the MSDN subscriber site, you can still get access 
to your old keys (though I could be wrong about that)

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Glen Harvy
Sent: Monday, 15 August 2016 11:46 AM
To: ozDotNet >
Subject: Expired MSDN Subscription - Transfer of VS to a new machine

Hi,

I'm currently in a dispute with Microsoft in that I have been unable to 
"activate" my licence as they claim I have "exceeded" my activations.
All I want to do is move my VS2013 installation from my old PC to a new PC.

My MSDN subscription expired some time ago however I'm supposed to have a 
perpetual licence. At least that's how I understood it.

They are also adamant that I need a Product Key. Product Keys have never been 
needed/available for VS2013 and above when acquired via a MSDN subscription.

Has any one else ever run into this brick wall?

Glen.







RE: Entity Framework - the lay of the land

2016-09-19 Thread
When I posted on Facebook about it the other day, another Microsoft friend 
noted that he was going to be the product manager for EF, but commented that he 
managed “to dodge that bullet”.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Scott Barnes
Sent: Tuesday, 20 September 2016 1:42 PM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

Entity Framework was born out of many attempts to solve the DAL tier and let 
developers avoid talking to DBAs directly. The amount of churn it's gone 
through and the level of pain it rewards doesn't seem, imho, to justify its 
adoption.

As for forgotten child: I can't speak to the program management level, but when 
I was on the product management side of things we avoided that clump of code as 
much as possible. It was too hard to build a narrative around, and even when we 
managed to wrangle the mess into a coherent strategy they'd turn and flip the 
table over with "I have a better idea on how to solve this pattern..." and, 
sure as your Google search for "CRUD EntityFramework", the entire blogosphere 
would leave you in the corner, confused and wandering aimlessly, as if to say 
out loud "I trusted them, they...they have cheated me for the last time".

I recently watched a Unity3D dev switch to web-centric .NET dev, and he died a 
miserable painful death on Entity Framework code-first. To quote: "I went to 
use the migration strategy and it left me a broken man, it just doesn't work as 
it's advertised".

It's time to put this and PRISM in the "GitHub" graveyard. Say out loud you 
support it, but block any future pull requests.


---
Regards,
Scott Barnes
http://www.riagenic.com

On Tue, Sep 20, 2016 at 1:33 PM, Craig van Nieuwkerk 
> wrote:
To give more info, 99% of the CUD was done via NHibernate. Simple select 
queries, like those for lookup lists, were also done via NHibernate, using the 
built-in caching and Redis cache, but more complicated queries were straight 
SQL and PetaPoco.

Craig


On Tue, Sep 20, 2016 at 1:30 PM, Craig van Nieuwkerk 
> wrote:
Not EF, but I have used NHibernate in an application, in conjunction with 
optimised SQL where required, and it easily supported 1000+ users. But it is 
very easy to stuff it up and find you can't support 5 simultaneous users.

Even StackOverflow before it used Dapper used LinqToSql. Of course, they had to 
optimise and go to Dapper but the LinqToSql version still supported heaps of 
traffic.

Craig

On Tue, Sep 20, 2016 at 1:22 PM, David Apelt 
> wrote:
Thanks everyone for your contributions to my original questions. I am a little 
surprised about how poor people's real-world experience has been with EF and 
other ORMs.

A little poll:

Is anyone successfully using EF in a production environment for a non-trivial 
application?  And if yes, then why has yours worked where others have failed.

Regards
Dave A






RE: Entity Framework - the lay of the land

2016-09-19 Thread
Hey Craig,

I often get coy when I hear comparisons with Stack Overflow, Twitter, Facebook, 
Blog Engines, etc. though.

Most of those platforms are happy to just throw away transactions when the 
going gets heavy.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Craig van Nieuwkerk
Sent: Tuesday, 20 September 2016 1:33 PM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

To give more info, 99% of the CUD was done via NHibernate. Simple select 
queries, like those for lookup lists, were also done via NHibernate, using the 
built-in caching and Redis cache, but more complicated queries were straight 
SQL and PetaPoco.

Craig

On Tue, Sep 20, 2016 at 1:30 PM, Craig van Nieuwkerk 
> wrote:
Not EF, but I have used NHibernate in an application, in conjunction with 
optimised SQL where required, and it easily supported 1000+ users. But it is 
very easy to stuff it up and find you can't support 5 simultaneous users.

Even StackOverflow before it used Dapper used LinqToSql. Of course, they had to 
optimise and go to Dapper but the LinqToSql version still supported heaps of 
traffic.

Craig

On Tue, Sep 20, 2016 at 1:22 PM, David Apelt 
> wrote:
Thanks everyone for your contributions to my original questions. I am a little 
surprised about how poor people's real-world experience has been with EF and 
other ORMs.

A little poll:

Is anyone successfully using EF in a production environment for a non-trivial 
application?  And if yes, then why has yours worked where others have failed.

Regards
Dave A





RE: Entity Framework - the lay of the land

2016-09-19 Thread
Agreed Ken. Actually I had another odd one about a year ago with a large bank 
(which bank?).

If I did a transaction search with:

From Date: start of the month
To Date: end of the month

a particular transaction on the last day of the month appeared. But if I 
selected:

From Date: end of the month
To Date: also end of the month

In the same screen, one of the transactions on that end-of-month date 
disappeared.

Turns out that one outcome was based on transaction date, the other was based 
on posting date. But pretty bizarre when you’re only dealing with dates on the 
same screen.

I can imagine what leads to this sort of thing though.
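One plausible mechanism can be sketched in a few lines. The Txn shape and the dates below are invented for illustration; the point is just that a single row carrying both a transaction date and a posting date will appear in one search and vanish from the other, depending on which column each screen filters:

```csharp
using System;
using System.Linq;

// Hypothetical shape: each bank transaction carries two dates.
class Txn
{
    public string Desc = "";
    public DateTime TransactionDate;
    public DateTime PostingDate;
}

static class Demo
{
    public static (int MonthHits, int DayHits) Search()
    {
        var txns = new[]
        {
            // Occurred on the last day of the month, posted the next day.
            new Txn { Desc = "card purchase",
                      TransactionDate = new DateTime(2016, 8, 31),
                      PostingDate     = new DateTime(2016, 9, 1) },
        };

        var startOfMonth = new DateTime(2016, 8, 1);
        var endOfMonth   = new DateTime(2016, 8, 31);

        // Month-range search filters on transaction date: the row appears.
        int monthHits = txns.Count(t => t.TransactionDate >= startOfMonth
                                     && t.TransactionDate <= endOfMonth);

        // Single-day search filters on posting date: the same row vanishes.
        int dayHits = txns.Count(t => t.PostingDate >= endOfMonth
                                   && t.PostingDate <= endOfMonth);

        return (monthHits, dayHits);
    }

    static void Main()
    {
        var (month, day) = Search();
        Console.WriteLine($"month search: {month}, end-of-month search: {day}");
    }
}
```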

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Monday, 19 September 2016 5:15 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Entity Framework - the lay of the land

A large bank (like one of the Big4 in Aus) has a staggering number of 
applications. Even running what you'd think is the simplest product results in 
multiple applications being involved, from opening an account through to 
day-to-day transacting, especially given the multiple channels that might be 
available.

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Monday, 19 September 2016 3:23 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: RE: Entity Framework - the lay of the land

People always use banks as the canonical example, but I had one at a local bank 
where I went to an ATM and did a transfer “From Account” -> “To Account” where 
both accounts were with the same bank.

Came out of the “from”, and never went into the “to”.

After what seemed like hours on the phone, they told me that “the person who 
had typed in the account number had got it wrong”.

I said “person???” “type”

That’s when they explained to me that their savings system wasn’t really 
connected to their credit card system, and on that afternoon the integration 
link had broken down, so they were printing out the transactions on one and 
typing them into the other. There really was a little person in the ATM.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Stephen Price
Sent: Monday, 19 September 2016 1:50 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: Entity Framework - the lay of the land


While on the topic of databases...



I made a flight booking via the Altitude points system yesterday and it 
failed. Gave me a number to call during business hours.

Turns out just the return flight was made but nothing charged. That's not very 
atomic, hey?



Hehe love that dialup db connection idea...


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
<ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>> on behalf 
of Greg Low (罗格雷格博士) <g...@greglow.com<mailto:g...@greglow.com>>
Sent: Monday, 19 September 2016 11:06:05 AM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land

I remember many years ago, connecting the devs to the DB via a dial-up 64 kbps 
modem. Worked wonders for the code that came back. Suddenly they noticed every 
call.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Monday, 19 September 2016 12:34 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: Entity Framework - the lay of the land

On Mon, 19 Sep 2016 at 10:38 Greg Keogh 
<gfke...@gmail.com<mailto:gfke...@gmail.com>> wrote:
I had an argument internally that caching was good, with the alternate side 
saying that “cache invalidation” was hard so they never use it.
I think it is "hard" but don't write it off completely. Search for "second 
level cache" and you'll see it's not that trivial to use properly. Some ORMs 
have it as an optional feature. You've got to consider what to cache, eviction 
or expiry policy, concurrency, capacity

RE: Entity Framework - the lay of the land

2016-09-18 Thread
People always use banks as the canonical example, but I had one at a local bank 
where I went to an ATM and did a transfer “From Account” -> “To Account” where 
both accounts were with the same bank.

Came out of the “from”, and never went into the “to”.

After what seemed like hours on the phone, they told me that “the person who 
had typed in the account number had got it wrong”.

I said “person???” “type”

That’s when they explained to me that their savings system wasn’t really 
connected to their credit card system, and on that afternoon the integration 
link had broken down, so they were printing out the transactions on one and 
typing them into the other. There really was a little person in the ATM.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Monday, 19 September 2016 1:50 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land


While on the topic of databases...



I made a flight booking via the Altitude points system yesterday and it 
failed. Gave me a number to call during business hours.

Turns out just the return flight was made but nothing charged. That's not very 
atomic, hey?



Hehe love that dialup db connection idea...


From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
<ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>> on behalf 
of Greg Low (罗格雷格博士) <g...@greglow.com<mailto:g...@greglow.com>>
Sent: Monday, 19 September 2016 11:06:05 AM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land

I remember many years ago, connecting the devs to the DB via a dial-up 64 kbps 
modem. Worked wonders for the code that came back. Suddenly they noticed every 
call.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Monday, 19 September 2016 12:34 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: Entity Framework - the lay of the land

On Mon, 19 Sep 2016 at 10:38 Greg Keogh 
<gfke...@gmail.com<mailto:gfke...@gmail.com>> wrote:
I had an argument internally that caching was good, with the alternate side 
saying that “cache invalidation” was hard so they never use it.
I think it is "hard" but don't write it off completely. Search for "second 
level cache" and you'll see it's not that trivial to use properly. Some ORMs 
have it as an optional feature. You've got to consider what to cache, eviction 
or expiry policy, concurrency, capacity, etc. I implemented simple caching in a 
server app a long time ago, then about a year later I put performance counters 
into the code and discovered that in live use the cache was usually going empty 
before it was accessed, so it was mostly ineffective. Luckily I could tweak it 
into working. So caching is great, but be careful -- GK
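As a rough illustration of just one item on that list (expiry), a minimal cache might look like the sketch below. The names are hypothetical, the clock is passed in explicitly to keep the example deterministic, and a real second-level cache would also need eviction, capacity limits, concurrency control, and invalidation on writes:

```csharp
using System;
using System.Collections.Generic;

// Minimal expiring-cache sketch: entries are reloaded from the backing store
// (e.g. the database) only when missing or past their time-to-live.
class ExpiringCache<TKey, TValue> where TKey : notnull
{
    private readonly Dictionary<TKey, (TValue Value, DateTime Expires)> _entries = new();
    private readonly TimeSpan _ttl;
    public int Loads;                                // hits on the backing store

    public ExpiringCache(TimeSpan ttl) => _ttl = ttl;

    public TValue Get(TKey key, Func<TKey, TValue> load, DateTime now)
    {
        if (_entries.TryGetValue(key, out var e) && e.Expires > now)
            return e.Value;                          // fresh hit: no DB round trip
        var value = load(key);                       // miss or expired: reload
        Loads++;
        _entries[key] = (value, now + _ttl);
        return value;
    }
}

static class Demo
{
    // Three reads of the same key: load, fresh hit, then reload after expiry.
    public static int LoadsForThreeReads()
    {
        var cache = new ExpiringCache<int, string>(TimeSpan.FromMinutes(5));
        var t0 = new DateTime(2016, 9, 19, 12, 0, 0);

        cache.Get(1, k => "client " + k, t0);                 // miss: load
        cache.Get(1, k => "client " + k, t0.AddMinutes(1));   // within TTL: hit
        cache.Get(1, k => "client " + k, t0.AddMinutes(10));  // expired: reload

        return cache.Loads;
    }

    static void Main() =>
        Console.WriteLine($"backing store hit {LoadsForThreeReads()} times for 3 reads");
}
```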

I'd argue caching is a good idea so long as it is not a substitute for good 
performance optimisation as you go.

As a general discipline we roll with a rule I call "10x representative data 
load" which means we take whatever we think the final system is going to run 
with for a data set, load each dev with 10x of that on their workstations, and 
make them live that dream.

The reality is that a bit of planning for optimal indexes as well as casting an 
eye over the execution plan after you write each proc isn't a lot of dev 
overhead. At least you know when what you have built rolls out it performs as 
well as it can given other constraints.

David.




--
David Connors
da...@connors.com<mailto:da...@connors.com> | @davidconnors | LinkedIn | +61 
417 189 363


RE: Entity Framework - the lay of the land

2016-09-18 Thread
I remember many years ago, connecting the devs to the DB via a dial-up 64 kbps 
modem. Worked wonders for the code that came back. Suddenly they noticed every 
call.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | 
http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Monday, 19 September 2016 12:34 PM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

On Mon, 19 Sep 2016 at 10:38 Greg Keogh 
> wrote:
I had an argument internally that caching was good, with the alternate side 
saying that “cache invalidation” was hard so they never use it.
I think it is "hard" but don't write it off completely. Search for "second 
level cache" and you'll see it's not that trivial to use properly. Some ORMs 
have it as an optional feature. You've got to consider what to cache, eviction 
or expiry policy, concurrency, capacity, etc. I implemented simple caching in a 
server app a long time ago, then about a year later I put performance counters 
into the code and discovered that in live use the cache was usually going empty 
before it was accessed, so it was mostly ineffective. Luckily I could tweak it 
into working. So caching is great, but be careful -- GK

I'd argue caching is a good idea so long as it is not a substitute for good 
performance optimisation as you go.

As a general discipline we roll with a rule I call "10x representative data 
load" which means we take whatever we think the final system is going to run 
with for a data set, load each dev with 10x of that on their workstations, and 
make them live that dream.

The reality is that a bit of planning for optimal indexes as well as casting an 
eye over the execution plan after you write each proc isn't a lot of dev 
overhead. At least you know when what you have built rolls out it performs as 
well as it can given other constraints.

David.




--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363


Re: Entity Framework - the lay of the land

2016-09-18 Thread
Simple examples are anything many-to-many. If I have passengers on flights, I 
might have a Flights table, a Passengers table and perhaps some sort of 
FlightManifests table (who's on which flights).

But I sure wouldn't want those three as objects. I'd probably want a Passenger 
object with a Flights collection, and a Flight object with a Passengers 
collection.
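That shape can be sketched directly (the member names beyond those in the post are hypothetical): the FlightManifests join table gets no class of its own and simply becomes the two collections, with a small helper keeping both sides of the relationship in sync:

```csharp
using System;
using System.Collections.Generic;

// Tables: Flights, Passengers, FlightManifests.
// Objects: only Flight and Passenger; the join table hides in the collections.
class Passenger
{
    public int PassengerId;
    public string Name = "";
    public List<Flight> Flights { get; } = new();
}

class Flight
{
    public int FlightId;
    public string FlightNumber = "";
    public List<Passenger> Passengers { get; } = new();
}

static class Demo
{
    // Boarding = inserting one FlightManifests row; in the object model it
    // means updating both collections so the relationship stays consistent.
    public static (int PaxOnFlight, int FlightsForPax) BoardOne()
    {
        var flight = new Flight { FlightId = 1, FlightNumber = "SQL775" };
        var pax = new Passenger { PassengerId = 1, Name = "Greg" };

        flight.Passengers.Add(pax);
        pax.Flights.Add(flight);

        return (flight.Passengers.Count, pax.Flights.Count);
    }

    static void Main()
    {
        var (onFlight, forPax) = BoardOne();
        Console.WriteLine($"{onFlight} passenger(s) on the flight, " +
                          $"{forPax} flight(s) for the passenger");
    }
}
```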

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com




On Sun, Sep 18, 2016 at 2:28 PM +1000, "Greg Keogh" 
> wrote:

GL

If your table design matches your object design, at least one of them is a poor 
design (again I'm talking about serious apps).

Then there's no hope. Game over, man! It was easier for Jeff Goldblum to plug 
his laptop into an alien mothership than it is for coders and DBAs to exchange 
data effectively. Perhaps the relational database is a niche evolutionary 
branch that just gained too much popularity in the last 30 years and is now 
overused or incorrectly used, and we all take it for granted. Robust RDBs come 
in all sizes and prices, many free, so they're just everywhere and you use them 
without thinking. Codd might regret his legacy!

You must have experienced many situations where some business data doesn't feel 
right in an RDB and you finish up with self-joins and tricks to mimic 
hierarchies, inheritance or represent temporal data. If other people have 
stumbled into this situation and have opted for an effective non-RDB solution 
then I'm keen to hear what happened.

In light of this whole discussion though, in future I'm going to be more 
careful about bridging the code-to-DB gap. Rather than just lazily spitting out 
wads of ORM-generated code and throwing it at the DB, I'm going to consider how 
to use views and procs more effectively to do what they do best.

GK


Re: Entity Framework - the lay of the land

2016-09-18 Thread
:-)

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com




On Sun, Sep 18, 2016 at 5:57 PM +1000, "noonie" 
> wrote:

How to bridge the app/db gap? Simple: learn about your enemy & make her your 
friend.

Cooperate, Communicate, Collaborate

Sometimes it works ;-)

--
noonie



On 18 September 2016 at 14:28, Greg Keogh 
> wrote:
GL

If your table design matches your object design, at least one of them is a poor 
design (again I'm talking about serious apps).

Then there's no hope. Game over, man! It was easier for Jeff Goldblum to plug 
his laptop into an alien mothership than it is for coders and DBAs to exchange 
data effectively. Perhaps the relational database is a niche evolutionary 
branch that just gained too much popularity in the last 30 years and is now 
overused or incorrectly used, and we all take it for granted. Robust RDBs come 
in all sizes and prices, many free, so they're just everywhere and you use them 
without thinking. Codd might regret his legacy!

You must have experienced many situations where some business data doesn't feel 
right in an RDB and you finish up with self-joins and tricks to mimic 
hierarchies, inheritance or represent temporal data. If other people have 
stumbled into this situation and have opted for an effective non-RDB solution 
then I'm keen to hear what happened.

In light of this whole discussion though, in future I'm going to be more 
careful about bridging the code-to-DB gap. Rather than just lazily spitting out 
wads of ORM-generated code and throwing it at the DB, I'm going to consider how 
to use views and procs more effectively to do what they do best.

GK



Re: Entity Framework - the lay of the land

2016-09-17 Thread
Three other key aspects of this:

If your table design matches your object design, at least one of them is a poor 
design (again I'm talking about serious apps). Yet most ORMs start with this 
assumption.

If you don't bypass your normal object access paths for high speed operations, 
you'll have serious perf issues. It might seem clean to load up a set of 
entities to filter and paginate them on each call. People who do that keep 
generating work for people like me though.

Finally, caching is your friend. I'm called in all the time to help with 
concurrency and scale issues. The #1 way to get a DB to scale is to stop 
talking to it in the first place.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>

From: Greg Low (罗格雷格博士) <g...@greglow.com<mailto:g...@greglow.com>>
Sent: Saturday, September 17, 2016 11:11 AM
Subject: RE: Entity Framework - the lay of the land
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>


And if you have two days free on 28th/29th of this month,  come 
and spend those days on starting to get your head around query performance: 
http://www.sqldownunder.com/Training/Courses/3   (And sorry, 
Melbourne only this year. Might get time mid-next year for a Sydney one).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>

From: Greg Low (罗格雷格博士)
Sent: Saturday, 17 September 2016 11:04 AM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: RE: Entity Framework - the lay of the land

Hey Dave and all,

“The great” -> hardly but thanks Dave.

Look, my issues with many of these ORMs are many. Unfortunately, I spend my 
life on the back end of trying to deal with the messes involved. The following 
are the key issues that I see:

Potentially horrid performance

I’ve been on the back end of this all the time. There are several reasons. One 
is that the frameworks generate horrid code to start with, the second is that 
they are typically quite resistant to improvement, the third is that they tend 
to encourage processing with far too much data movement.

I regularly end up in software houses with major issues that they don’t know 
how to solve. As an example, I was at a start-up software house recently. They 
had had a team of 10 developers building an application for the last four 
years. The business owner said that if it would support 1000 concurrent users, 
they would have a viable business. 5000 would make a good business. 500 they 
might survive. They had their first serious performance test two weeks before 
they had to show the investors. It fell over with 9 concurrent users. The 
management (and in this case the devs too) were panic-stricken.

Another recent example was a software house that had to deliver an app to a 
government department. They were already 4 weeks overdue and couldn't get it 
out of UAT. They wanted a silver bullet. That's not the point at which to be 
discussing their architectural decisions, yet those were the issue.

I was in a large financial in Sydney a while back. They were in the middle of 
removing the ORM that they’d chosen out of their app because try as they might, 
they couldn’t get anywhere near the required performance numbers. Why had they 
called me in? Because before they wrote off 8 months’ work for 240 developers, 
the management wanted another opinion.

Just yesterday I was working on a background processing job that processes a 
certain type of share trades in a UK-based financial service organisation. On a 
system with 48 processors, and 1.2 TB of memory, and 7 x 1 million UK pound 
20TB flash drive arrays, it ran for 48 minutes. During that time, it issued 550 
million SQL batches to be processed and almost nothing else would work well on 
the machine at the same time. The replacement job that we wrote in T-SQL issued 
40,000 SQL batches and ran in just over 2 minutes. I think I can get that to 
1/10 of that with further work. Guess which version is likely to get used now?

Minimal savings yet long term pain

Many of the ORMs give you an initial boost to “getting something done”. But at 
what cost? At best, on most projects that I see, it might save 10% of the 
original development time, on the first project. But as David pointed out in 
his excellent TechEd talk with Joel (and as I’ve seen from so many other 
places), the initial development cost of a project is usually only around 10% 
of the overall development cost. So what are we talking about? Perhaps 1% of 
the whole project? Putting yourself into a long-term restrictive straightjacket 
situation for the sake of a 1% saving is a big, big call. The problem is that 
it’s being 

RE: Entity Framework - the lay of the land

2016-09-16 Thread
And if you have two days free on 28th/29th of this month,  come 
and spend those days on starting to get your head around query performance: 
http://www.sqldownunder.com/Training/Courses/3   (And sorry, 
Melbourne only this year. Might get time mid-next year for a Sydney one).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/> | 
http://greglow.me<http://greglow.me/>


RE: Entity Framework - the lay of the land

2016-09-16 Thread
Hey Dave and all,

“The great” -> hardly but thanks Dave.

Look, my issues with many of these ORMs are many. Unfortunately, I spend my 
life on the back end of trying to deal with the messes involved. The following 
are the key issues that I see:

Potentially horrid performance

I’ve been on the back end of this all the time. There are several reasons: the 
frameworks generate horrid code to start with; they are typically quite 
resistant to improvement; and they tend to encourage processing with far too 
much data movement.

I regularly end up in software houses with major issues that they don’t know 
how to solve. As an example, I was at a start-up software house recently. They 
had had a team of 10 developers building an application for the last four 
years. The business owner said that if it would support 1,000 concurrent users, 
they would have a viable business; 5,000 would make a good business; at 500 they 
might survive. They had their first serious performance test two weeks before 
they had to show the investors. It fell over with 9 concurrent users. The 
management (and in this case the devs too) were panic-stricken.

Another recent example was a software house that had to deliver an app to a 
government department. They were already 4 weeks overdue and couldn’t get it 
out of UAT. They wanted a silver bullet. That’s not the point at which to be 
discussing architectural decisions, yet those decisions were the issue.

I was in a large financial institution in Sydney a while back. They were in the middle of 
removing the ORM that they’d chosen out of their app because try as they might, 
they couldn’t get anywhere near the required performance numbers. Why had they 
called me in? Because before they wrote off 8 months’ work for 240 developers, 
the management wanted another opinion.

Just yesterday I was working on a background processing job that processes a 
certain type of share trade in a UK-based financial services organisation. On a 
system with 48 processors, 1.2 TB of memory, and seven 20 TB flash drive arrays 
(around £1 million each), it ran for 48 minutes. During that time, it issued 550 
million SQL batches to be processed, and almost nothing else would work well on 
the machine at the same time. The replacement job that we wrote in T-SQL issued 
40,000 SQL batches and ran in just over 2 minutes. I think I can get it down to 
a tenth of that with further work. Guess which version is likely to get used now?

Minimal savings yet long term pain

Many of the ORMs give you an initial boost to “getting something done”. But at 
what cost? At best, on most projects that I see, it might save 10% of the 
original development time, on the first project. But as David pointed out in 
his excellent TechEd talk with Joel (and as I’ve seen from so many other 
places), the initial development cost of a project is usually only around 10% 
of the overall development cost. So what are we talking about? Perhaps 1% of 
the whole project? Putting yourself into a long-term restrictive straitjacket 
situation for the sake of a 1% saving is a big, big call. The problem is that 
it’s being decided by someone who isn’t looking at the lifetime cost, and often 
90% of that lifetime cost comes out of someone else’s bucket.

Getting stuck in how it works

For years, code generated by tools like Linq to SQL was very poor. And it knew 
it was talking to SQL Server in the first place. Now imagine that you’re 
generating code and you don’t even know what the DB is. That’s where EF 
started. Very poor choices are often made in these tools. The whole reason that 
“optimize for ad hoc workloads” was added to SQL Server was to deal with the 
mess from the plan cache pollution caused by these tools. A simple example is 
that on the SqlCommand object, they called AddWithValue() to add parameters to 
the parameters collection. That’s a really bad idea. It provides the name of 
the parameter and the value, but no data type. So it used to try to derive the 
data type from the data. So SQL Server would end up with a separate query plan 
for every combination of every length of string for every parameter. And what 
could the developers do to fix it? Nothing. Because it was baked into how the 
framework worked. The framework eventually got changed a bit to have more 
generic sizes but still never addressed the actual issues.
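To make the parameter-typing problem concrete, here is a minimal C# sketch. The table, column, and method names are invented for illustration, and the snippet assumes an already-open SqlConnection; it is a sketch of the technique, not code from the original discussion.

```csharp
using System.Data;
using System.Data.SqlClient;

static class ParameterSketch
{
    // Problematic: AddWithValue infers the parameter's type and length from
    // the value, so "Smith" arrives as nvarchar(5) and "Fitzgerald" as
    // nvarchar(10) — a separate cached plan for each distinct string length.
    static SqlCommand Inferred(SqlConnection conn, string name)
    {
        var cmd = new SqlCommand(
            "SELECT CustomerID FROM dbo.Customers WHERE Name = @Name", conn);
        cmd.Parameters.AddWithValue("@Name", name);
        return cmd;
    }

    // Better: declare the data type and maximum length explicitly, so every
    // call shares a single cached plan regardless of the value passed.
    static SqlCommand Typed(SqlConnection conn, string name)
    {
        var cmd = new SqlCommand(
            "SELECT CustomerID FROM dbo.Customers WHERE Name = @Name", conn);
        cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100).Value = name;
        return cmd;
    }
}
```

The fix is available to you when you write the data access code yourself; it wasn't available to users of frameworks that baked the inferred call in.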

Getting stuck with old code

SQL Server 2008 added a bunch of new data types. Spatial was one of my 
favourites. When did that get added to EF? Many, many, many years later. What 
could the developers do about it? Almost nothing except very hacky workarounds. 
When you use a framework like this, you are totally at the mercy of what its 
developers feel is important, and that’s if they haven’t already lost interest 
in it. When you code with a tool that was targeted at cross-DB use, you usually end up 
with a very poor choice of data types. You end up with lowest common 
denominator of everything, or you end 

RE: SQL Server Collation

2016-06-11 Thread
The SQL collations are only there for backwards compatibility and should have 
stopped being the default many versions ago.

We always teach admins to use the Windows collations (ie: ones without a SQL 
prefix) rather than the SQL collations.

Now that Latin1_General_CI_AS had become the default, I used it for the new 
samples for 2016 (Wide World Importers). The product group got me to change it 
to Latin1_General_100_CI_AS, as they said that’s the one they now prefer. (In 
which case, why isn’t it the default, you ask…)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Sunday, 12 June 2016 12:38 PM
To: ozDotNet 
Subject: SQL Server Collation

I'm installing SQL Server 2016 Express for the first time to try it out. It's 
asking me for a default collation and its default is Latin1_General_CI_AS. Is 
this a good default, or are there better choices? I know it might be a 
complicated argument. I thought I'd check first with the SQL boffins in here 
before I continue, as I think the default gets baked into the install forever 
(does it?) -- GK


RE: [OT] Looking for work

2016-06-09 Thread
Love it…

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Friday, 10 June 2016 10:45 AM
To: ozDotNet 
Subject: Re: [OT] Looking for work

You could call the app "Tender".

Sounds all kinds of wrong, I know.
Get Outlook for iOS



On Thu, Jun 9, 2016 at 7:56 PM +0800, "Scott Barnes" 
> wrote:
There should be a Tinder / Grinder app for CVs... but instead of looks it's 
blind resumes... then you as an employer have to ask a puzzle or a specific 
question (you only get like 3). The answer then gets handed in, but you the 
employer need to match it to one of the pool of CVs you've "kept"... if you 
then lock in the right target, they get the job..

It's pretty much the same odds :) hehehe


---
Regards,
Scott Barnes
http://www.riagenic.com

On Thu, Jun 9, 2016 at 8:20 PM, Tony Wright 
> wrote:
I agree with you. There are all sorts of biases that come into play at 
interviews that often have very little to do with getting the right person to 
match the job and whether they can even do the job.

There are interviewers who want someone "like them" and might, perhaps, 
expect your braindump of technical knowledge to match theirs; there's primacy 
bias (comparing every candidate to the first one interviewed), recency bias 
(comparing everyone to the last person interviewed), the halo effect 
(one positive answer overshadows all negative answers), the horns effect (one 
negative answer overshadows all positive answers), and there's unconscious 
discrimination.

So there are definitely good people that get passed over during the interview 
process. Unfortunately, if you don't actually know the person or know of the 
person, there's really no other way. Mistakes in the interviewing process are 
often made, even when the person answers all the questions correctly they can 
still be a disaster. And these mistakes can just as easily be made by 
professional recruiters, who often suffer the same biases as everyone else.

The problem is really when you get someone who isn't a match. It is disastrous 
because it is a huge waste of time and money finding out the person isn't a 
fit. It is also disastrous because you will now need to find someone else to 
fill that position. And finally, it is disastrous for the person themselves, 
because they could have missed opportunities to excel in something else where 
they would have been a better match. So you don't really want to hire a person 
who thinks they're an "expert" but isn't really one. It can make or break a 
project, and when it's your money on the line...




On Thu, Jun 9, 2016 at 3:39 PM, Scott Barnes 
> wrote:
I'm still stumbling my way through a psychology degree (hah, weak attempt at an 
appeal to authority, lol) but I'm more and more convinced that "technical 
interviews" are a form of projection, less about means-testing a person's 
potential / abilities. Some folks just have extremely poor working memory while 
others have excellent ones, but on the whole the ability to regurgitate the 
exact location of where logic lies within the .NET framework is really moot. 
Hell, I think I could probably put the .NET program managers themselves through 
the same process, and I'd wonder if they would come out unscathed, and 
moreover, what purpose does it really serve?

If someone can memorise the entirety of ASP.NET MVC but fails 
to apply the same logic in, say, the Mono subset, then do they really know .NET, 
or do they just know a subset of .NET? What if they could provide coverage on 
everything .NET up until LINQ or Entity Framework? Is that still a .NET pass 
or fail? In that they've effectively illustrated they can grasp or comprehend 
the primitives required to progress with .NET, but in the end have poor recall 
abilities?

In my interview process, what I typically look for most is an appetite for 
puzzles. You're an engineer; you're not meant to walk in with answers, you're 
supposed to walk in with enough foundation pieces to find answers. The trick 
with interviews is to then test the foundation... it's why stupid questions 
like "Why are manhole covers round" are legendary... it's an open question that 
has only one true answer (because Ninja Turtles need to get in / out of them) 
but lends itself to creative / critical thinking.

Technicals are fine, but only if they are targeted at foundation-level points 
... ie "in pseudo code, write the usage of a pointer being passed in and out of 
two separate layers, and then the same thing but with a copy instead" - who 
cares if the person writes this in 

Re: [OT] Looking for work

2016-06-07 Thread
Be careful what knowledge you claim to have too. I regularly do technical 
interviews on behalf of clients (3 so far this week already - must be the 
season for it). If a person claims experience or knowledge in an area, I tend 
to drill into it with increasing depth to judge that myself anyway. It's 
obvious when they just put entries in to match a recruiter's search.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com




On Tue, Jun 7, 2016 at 3:49 PM +1000, "Tom P" 
> wrote:

What do the seniors here look for on a CV? I've been told by a few people I 
should be giving myself a score out of 10 for competency in a particular 
language/technology but I find it quite hard to do that and have it actually 
mean anything.

Thanks
Tom

On 7 June 2016 at 10:22, Greg Keogh 
> wrote:
I had a tough time down there too. Everywhere seemed to want an AngularJS 
"expert" when I was looking.

Oh hell! I'll never work again -- GK



RE: [OT] Looking for work

2016-06-06 Thread
Hey Tom,

Best to give us a clue on what you think your core strengths are. Then anyone 
that knows of something can let you know.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tom P
Sent: Tuesday, 7 June 2016 10:17 AM
To: ozDotNet 
Subject: Re: [OT] Looking for work

No interviews in three months. I'll look into an Angular cert perhaps then.

Thanks
Tom

On 7 June 2016 at 10:15, Bec C 
> wrote:
I had a tough time down there too. Everywhere seemed to want an AngularJS 
"expert" when I was looking. Are you getting interviews or not even reaching 
that stage?

On Tue, Jun 7, 2016 at 10:09 AM, Tom P 
> wrote:
Hi folks,

I've really had a tough time finding work in Melbourne. Getting very desperate 
now. If anyone knows of a junior-intermediate role please send through.

Thanks
Tom




RE: [OT] WinHlp32.exe and .CHM help file format

2016-04-21 Thread
The current trend is towards online doco only, I’m guessing mostly because it’s 
so easy to keep up to date.

The SQL Server team begrudgingly supplies offline doco but it’s not a patch (no 
pun intended) on the online versions. They use their HelpViewer 2 mostly. 
Nothing is really coming in CHM any more.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ian Thomas
Sent: Thursday, 21 April 2016 10:33 PM
To: ozdotnet@ozdotnet.com
Subject: [OT] WinHlp32.exe and .CHM help file format

I know that the .CHM file format has been deprecated / not recommended / not 
supported for 5 or more years, but I found that there is an installer for 
Windows 8/8.1 for its “reader”, WinHlp32.exe – but not for Windows 10.
MSDN Magazine abandoned the format in 2009 or 2010.
I don’t know if anyone writes help documentation for their Windows applications 
any more – unless it’s PDF or some form of HTML.
What’s the experience and recommendations of you folks on the ozdotnet list?
There is a way to coerce the Windows 7 install package for WinHlp32.exe to work 
for an install on Windows 10 64-bit (which is quite clever), but I’m intrigued 
to know if Microsoft has any replacement for its various generations of help 
file formats. For a decade or more (15+ years perhaps) it spawned an entire 
industry of third-party help authoring applications, including a couple of 
Australian ones.

Ian Thomas
Albert Park, Victoria 3206 Australia



Re: Coffee snobs

2016-04-09 Thread
Awesome thanks all - the Giotto it is

Regards

Greg

Dr Greg Low
SQL Down Under
+61 419201410
1300SQLSQL (1300775775)

On 8 Apr 2016, at 1:10 PM, Andrew Navakas 
> wrote:

Hi,
I have a Breville at one office and it does a good enough job, but it is hard 
to be consistently good with it.
It is built to a price (all plastic & cheap parts), and won’t last. When 
something fails, it is unlikely to be fixable, or worth fixing.
I have a Rancilio Silvia at another location, which is getting more serious, & a 
Gaggia MDF grinder. It is about 13 years old, and has needed a small amount of 
maintenance (a new pump & a new boiler once). It is better than the Breville.
I have a Rocket Giotto at home (> 10 yrs old, & it hasn’t missed a beat), with a 
commercial-grade grinder (it is CRITICAL to have a good grinder) – much 
better, more consistent, better coffee than most cafes, but many $$$ more.
I have found Chris, the owner of this business:
http://talkcoffee.com.au
to be a font of wisdom & experience. He sells serious machines, and roasts some 
of the best beans I have ever had.
You won’t go wrong there.
Also, this site:
http://coffeesnobs.com.au/
is a great resource.
Andrew




RE: Coffee snobs

2016-04-07 Thread
I saw one the other day that featured bluetooth.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Bill McCarthy
Sent: Friday, 8 April 2016 11:29 AM
To: Greg Keogh <gfke...@gmail.com>; ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Coffee snobs

Actually, yes, at least one is, IIRC. You can send a message to it so it 
pre-warms etc. Still missing the delivery drone though.

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Greg Keogh<mailto:gfke...@gmail.com>
Sent: Friday, 8 April 2016 11:01 AM
To: ozDotNet<mailto:ozdotnet@ozdotnet.com>
Subject: Re: Coffee snobs

Are these things IoT enabled? -- GK

On 8 April 2016 at 10:56, Bill McCarthy 
<bill.mccarthy.li...@live.com.au<mailto:bill.mccarthy.li...@live.com.au>> wrote:

When my DeLonghi broke down (leaking water), I started looking at alternative 
machines. Most of the cheap machines will make a good short single shot. But I 
like a long flat white (occasionally a long macchiato), so it gets hard to find 
a machine that will do that consistently sub-$2k.

I used to have a cheap Breville years ago, and it was good, but it was only good 
at short drinks, and really was a messy pain compared to an auto machine. Me, 
I’d go auto at the sacrifice of money and quality.



The biggest thing to look out for is that if this is your go-to machine during 
the day, then this will be the coffee you get used to. Most people I know with 
decades of coffee-drinking experience in Australia used to find instant coffee 
acceptable, palatable, and even enjoyable. I was reminded of this the other 
weekend when I went to the Timboon miniature train festival, and the only 
coffee was instant: sometimes spoiling ourselves every day spoils us. If I 
didn’t drink ‘reasonable’ coffee all the time, I might be able to enjoy an 
instant, and maybe even a macca’s coffee too. 





Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10








RE: Coffee snobs

2016-04-07 Thread
:)

Yes, I was struggling to buy something with an “Oracle” name on it :)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Friday, 8 April 2016 11:06 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Coffee snobs

Greg, I was sure you would buy the Oracle. ;)
Sent from Outlook<https://aka.ms/kr63o9> on iOS






RE: Coffee snobs

2016-04-07 Thread
Yep, the DeLonghi has been “ok”. I think it was about $2.7k at the time.

I’m not necessarily after a sub 2k machine. Happy to pay for one that’s worth 
having.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>






Coffee snobs

2016-04-07 Thread
Hi Folks,

Given the importance of caffeine for code generation, I’m guessing there will 
be a few other coffee snobs on the list.

Anyone got a recommendation for a serious (possibly manual) coffee machine?

I’ve been using a DeLonghi automatic one but now feeling that I’d prefer 
something like this:

BES890 or BES920: http://www.breville.com.au/beverages/coffee-machines.html

The main thing that puts me off is the Breville brand.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com



RE: [OT] New laptop

2016-04-05 Thread
Still loving my Dell E7440. Lightweight, and I fitted it with 16GB of memory 
from Crucial, a 2TB EVO SSD, and an LTE-A internal modem from Sierra Wireless. 
(Currently getting near 100 Mb/sec down and 45 Mb/sec up.)

Great machine.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Dave Walker
Sent: Wednesday, 6 April 2016 8:34 AM
To: ozDotNet 
Subject: Re: [OT] New laptop


i5 is fine though I struggle a little with only 8 GB of RAM. An SSD is way more 
important though.

The Mac I bought last year - or its current equivalent - seems to have jumped 
$1k, which would have put it out of my bracket.
On 6 Apr 2016 09:52, "Tejas Goradia" 
> wrote:

Just wondering, is it worth spending so much money on local computing?

I wonder how many people are now using Amazon or Azure as an option vs buying 
best-in-breed local computing power.

Is an 8GB, i5 environment not enough?
On 29/10/2015 9:22 AM, "Tom Rutter" 
> wrote:
Geez had a look and it gets expensive real quick

On Thu, Oct 8, 2015 at 12:27 PM, Stephen Price 
> wrote:
I've ordered mine. If the 1TB version was available (no release date has been 
announced yet) I would have gotten that one. Shut up and take my money!
I have disclosed said pre-order to my significant other, and I've successfully 
trained her to react with a simple eye roll. I'll likely "pay" for it later. :)

It's a great feeling to not have to drool over Apple hardware and be 
embarrassed about being in the Microsoft camp.
It would have been nice for the Surface Book to have come with Usb-C port(s) 
but given I don't actually yet have any devices, and how long it takes for 
devices to spring up, I can live with USB 3.0 for a while more.

Exciting times!

On Thu, 8 Oct 2015 at 08:49 
> wrote:
Yeah - looks pretty nice, 512GB / i7/ 16GB for $4,199 AUD

Compared to similar spec Lenovo ThinkPad W541 (no touch/pen/etc) $3,649.00 AUD 
(though I think the Lenovo is 5th Gen i7 not 6th Gen as in Surface Book?)

Jason Roberts
Journeyman Software Developer

Twitter: @robertsjason
Blog: http://DontCodeTired.com
Pluralsight Courses: http://bit.ly/psjasonroberts

===
I welcome VSRE emails. Learn more at http://vsre.info/
===

From: DotNet Dude
Sent: ‎Wednesday‎, ‎7‎ ‎October‎ ‎2015 ‎5‎:‎18‎ ‎PM
To: ozDotNet

In case anyone here hasn't heard yet and is interested the new Surface Book 
will apparently be available in Oz on Nov 12th. Looks to be expensive though

On Thu, Sep 24, 2015 at 10:10 AM, DotNet Dude 
> wrote:
I also read somewhere next version of Surface Pro likely to be announced oct 6 
so anyone interested may want to wait to see what happens there


On Thursday, 24 September 2015, Ken Schaefer 
> wrote:
Microsoft are going to start offering Signature editions via their new stores 
in Aus (first one opening soon-ish in Sydney). They might offer the same online 
I guess, once the store opens.



From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Tom Rutter
Sent: Wednesday, 23 September 2015 3:57 PM
To: ozDotNet >
Subject: Re: [OT] New laptop

Any things to look out for if I buy direct from US? I've always purchased 
locally

On Fri, Aug 28, 2015 at 2:31 PM, Eddie de Bear (Gmail) 
> wrote:
The Signature Editions are the exact same machines (HP, Lenovo, etc) BUT 
stripped bare of all the crapware. From what I remember reading when Microsoft 
first started with them, it’s a clean Windows install, with all the correct 
tweaks, drivers etc to get the most out of the hardware.

Here is a link to their US store: 
http://www.microsoftstore.com/store/msusa/en_US/cat/categoryID.69916600


From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of mike smith
Sent: Friday, 28 August 2015 2:11 PM
To: ozDotNet >
Subject: Re: [OT] New laptop

On Fri, Aug 28, 2015 at 10:44 AM, Eddie de Bear (Gmail) 
> wrote:
This is where Microsoft could really make a difference, if they would stop 
thinking about just the US and make the Signature Edition laptops/PCs available 

RE: Azure Table query "not null"

2016-03-06 Thread
“It becomes a technical quiz at times to figure out how to use the pair of 
string keys effectively. You can finish up putting all of the row data into the 
keys! One of my live tables has a natural triple compound key, simple in a SQL 
schema, but in the table I made the PartitionKey one value and MD5 hashed the 
other two into an 8 byte hexchars RowKey. It feels odd to do this, but it 
actually works fine.”

Sounds like you might need a database rather than trying to build a lousy one 
out of table storage ☺

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 7 March 2016 10:09 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Azure Table query "not null"

OData may have a "null", but ATS certainly doesn't have the concept, or a 
schema, so I'm sympathetic to the fact that IS NOT NULL will not have any kind 
of direct efficient equivalent. However ... I need some way of finding all the 
rows with an ErrorMessage property. If there is no way of doing this without a 
full table scan, then it hints that I'm using the facility inappropriately, in 
a "relational" way that it doesn't support.

My log's RowKey is full of the timestamp, so unfortunately there's no room left 
to squeeze a flag into it. Which opens up the question of how much information 
you can squeeze into the Partition and Row keys in a readable and searchable 
way. It becomes a technical quiz at times to figure out how to use the pair of 
string keys effectively. You can finish up putting all of the row data into the 
keys!

One of my live tables has a natural triple compound key, simple in a SQL 
schema, but in the table I made the PartitionKey one value and MD5 hashed the 
other two into an 8 byte hexchars RowKey. It feels odd to do this, but it 
actually works fine.

GK
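GK's compound-key trick can be sketched in a few lines of Python (the helper 
name and the 16-hex-character truncation are illustrative assumptions based on 
the "8 byte hexchars" description above, not anything specified in the thread):

```python
import hashlib

def make_row_key(part2: str, part3: str) -> str:
    """Collapse the two remaining parts of a natural triple compound key into
    a short, fixed-width RowKey (the PartitionKey holds the first part).
    Truncating MD5 to 8 bytes (16 hex characters) keeps keys compact and
    sortable; collision risk is tiny for modest table sizes, but not zero."""
    digest = hashlib.md5(f"{part2}|{part3}".encode("utf-8")).hexdigest()
    return digest[:16]  # 8 bytes rendered as hex characters
```

The separator ("|") matters: without one, ("ab", "c") and ("a", "bc") would 
hash to the same RowKey.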

On 6 March 2016 at 20:51, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
If it's NULL, why would you store it at all? Surely the lack of a value is NULL.

In SQL Server when they issue XML, they have an option for a way to represent 
NULL. Normally they just omit the attribute. They normally only do that in case 
someone is trying to derive the schema from the data. In that case, you want 
something to let them know there is a column but it currently has no value.

For a table storage system (like an EAV system), why store anything? (Unless 
again, you're somehow deriving the schema from the data that's present).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com>

-Original Message-
From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Thomas Koster
Sent: Sunday, 6 March 2016 5:27 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: Azure Table query "not null"

On 4 March 2016 at 18:03, Greg Keogh 
<gfke...@gmail.com<mailto:gfke...@gmail.com>> wrote:
> Folks, anyone using Azure Tables Storage in anger? I really like it,
> simple and effective.
>
> What is the query syntax equivalent of SQL "not null", that is, a row
> has a named property? I have a table with tens of thousands of rows,
> but only a small percentage contains a property value named
> ErrorMessage, and I want to select them only. Going ErrorMessage neq
> "" works but it's too ugly to believe there isn't a better way.

OData has a "null" literal, but I don't know if they have it in Azure Tables (I 
have not used it "in anger").

Have you considered including something in the RowKey so that you can 
distinguish these rows from the rest with a range query instead?

--
Thomas Koster
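The RowKey flag Thomas suggests boils down to a lexicographic range filter: 
prefix the rows of interest, then select the prefix's range. A minimal sketch 
of building that OData filter string (the "E" error-flag prefix is an 
illustrative assumption):

```python
def prefix_range_filter(prefix: str) -> str:
    """Build an OData $filter selecting all RowKeys that start with `prefix`,
    using the standard lexicographic upper-bound trick: bump the last
    character of the prefix by one. (Edge cases such as a prefix ending in
    the maximum character value are ignored in this sketch.)"""
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return f"RowKey ge '{prefix}' and RowKey lt '{upper}'"
```

The resulting string would be passed as the table query's filter; because both 
bounds include the keys, the service can serve it as a range scan rather than 
a full table scan.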



RE: Azure Table query "not null"

2016-03-06 Thread
Hi Greg,

If you’re after simple, what does the Azure SQL DB T-SQL interface make hard? 
You don’t even need any NuGet package to work with it. They set up and run the 
DB.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 7 March 2016 9:46 AM
To: ozDotNet 
Subject: Re: Azure Table query "not null"

Hi Ian et al, I reckon most of the replies in the article are spot on regarding 
price, capacity, speed, transactions, etc (except for the last person who hates 
Table Storage for reasons I don't think are justifiable). You have to know in 
your bones if your data is relational (normalisable) or not, and if it needs 
what SQL Server gives you: joins, optimized queries, transactions, etc. I'm sure 
anyone worth their hourly rate knows in their bones if you need an RDB or not. 
If you need an RDB then use Azure SQL.

So when is Table Storage a good choice? I think the choice is narrowed greatly 
by the fact you must use two string keys per table with no joins. If you can 
coerce your data to work this way then Tables are great. I use them for log 
files and lookups. I even have a pair of tables with a fake join property, I 
can't update them in a transaction, but updates are rare and it doesn't matter 
in this case.

Using the key pair makes me do artificial things at times. For example, my log 
files have the PartitionKey as the machine name and I make a RowKey like this:

MMddHHmmssfff-nnn

The nnn is a cyclic counter suffix to allow up to 1000 inserts in the same time 
interval. Doing this seems kludgy, but it's what the string keys demand of you.
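A sketch of that key scheme in Python (the exact timestamp layout, year-first 
at millisecond precision, and the helper names are assumptions for 
illustration):

```python
import itertools
from datetime import datetime, timezone

# Cyclic 000-999 suffix to disambiguate rows inserted in the same millisecond.
_suffix = itertools.cycle(range(1000))

def log_row_key(now=None):
    """Build a sortable log RowKey of the form <timestamp>-<nnn>.
    Keys generated this way sort chronologically as plain strings, which is
    what Azure Table Storage's string RowKey requires."""
    now = now or datetime.now(timezone.utc)
    stamp = f"{now:%Y%m%d%H%M%S}{now.microsecond // 1000:03d}"
    return f"{stamp}-{next(_suffix):03d}"
```

Note the counter is shared process state; with multiple writers per machine 
you would need something atomic instead.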

For me, the greatest thing about table and blob storage is the ease of use. 
You just add a NuGet package and in a dozen simple lines of managed code you 
have it working. If I were to use Couch, Mongo, Raven or any other such DB I'd 
be stuffing around with unfamiliar runtimes, hosts, dependencies, installations 
and documentation. As a coder, I want things to "just work" without hours of 
suffering and learning, and the Azure storage APIs are simple and productive.

GK

On 6 March 2016 at 18:45, Ian Thomas 
> wrote:

I wondered about Azure SQL vs Azure Table Storage pros and cons, and to lessen 
my ignorance looked at a few Q at Stackoverflow.

This part of a response (5 years to 2 years old, so the balance may have 
changed considerably) is one person’s opinion, but I’d be interested in Greg 
Low’s comments on it:



When should I use SQL Azure and when should I use Table Storage?

This is an excellent question, and one of the tougher and harder-to-reverse 
decisions that solution architects have to make when designing for Azure.

There are multiple dimensions to consider. On the negative side, SQL Azure is 
relatively expensive per gigabyte of storage, does not scale super well and is 
limited to 150 GB/database; however, and this is very important, there are no 
transaction fees against SQL Azure and your developers already know how to code 
against it.

ATS is a different animal altogether. Capable of mega-scalability, it is dirt 
cheap to store in, but gets expensive to access frequently. It also requires a 
significant amount of CPU power from your nodes to manipulate. It basically 
forces your compute nodes to become mini DB servers, as all relational 
activity is delegated to them.

So, in my opinion, frequently accessed data that does not need huge scalability 
and is not super large in size should be destined for SQL Azure, otherwise 
Azure Table Services.

Your specific example, transactional data from financial transactions, is a 
perfect place for ATS, while meta information (account profiles, names, 
addresses, etc.) is perfect for SQL Azure.



All the other answers to the NULL question that I have seen (for table storage) 
have some sort of “clumsy” testing, along the lines that GK has used. There are 
several links elsewhere on SO (see the side-panel links to other questions on 
null testing), some of which lead to Microsoft guides, which may be helpful.



Ian Thomas

Albert Park, Victoria 3206 Australia



-Original Message-
From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Thomas Koster
Sent: Sunday, 6 March 2016 5:27 PM
To: ozDotNet >
Subject: Re: Azure Table query "not null"



On 4 March 2016 at 18:03, Greg Keogh 
> wrote:

> Folks, anyone using Azure Tables Storage in anger? I really 

RE: Azure Table query "not null"

2016-03-06 Thread
Hi Ian,

It’s a tough question.

For me, the biggest issue is where the querying really will happen. If 100% of 
the querying will happen in the application, and every object will just be 
persisted to the table storage and rehydrated before querying, then a table 
might suit. That usually leads to very poorly performing applications though, 
depending upon what you need to achieve. I see this type of app regularly.

For example, if I have a stock system and I need to clear the “quantity at last 
stocktake” value for every stock item, I could rehydrate every single stock 
item back to some middle tier, change the value, and then write them all back. 
Might feel clean and pure to some, but will take forever. There’s a big 
difference when you just say to the DB: “make every value in that column NULL”.

It's also worth considering that when you store the attributes of an object in 
a table store (key value pair store), instead of reading/writing one row, 
suddenly you might be reading and writing dozens of rows (or table values) to 
get the same outcome.

Potential scale really isn’t an issue for most of these types of DBs now. The 
current limit is 500GB and that’s going to increase substantially as well.  
Most of the Azure SQL DB storage is now on SSDs internally and most of the 
“normal” storage accounts aren’t. There could be a substantial performance 
difference.

Transaction support is another big issue. I was fascinated to see the comment 
that decided that table storage is a good target for financial transactions. 
Glad he’s not architecting systems that I work on. Azure table storage has a 
basic transaction concept but it won’t, for example, even deal with you deleting 
a row and putting another one back in. At least it didn’t last time I checked 
it out. It had very, very limited concepts of transactions.

Table storage is cheaper than DB storage. No argument. But the two are hardly 
comparable in any way. With table storage, it’s more like you’re building your 
own clumsy DB. Personally, I’d rather pay for one and use it. It also means you 
lose the ability to use all the related tooling, much of which can help a great 
deal.

The way I see it is that for most organisations, the data is the most valuable 
thing they own. It usually outlives generations of applications and just gets 
slightly morphed into different shapes over time. In most organisations, it’s 
also accessed by a number of applications, not just one. Designing your data 
storage for the needs of one version of one app that you need today is a big 
call, and usually not a very clever one.

Would I use table storage for anything? Sure, I can think of scenarios where I 
might. But it wouldn’t involve financial transactions. And most of the cases 
where I might have used it, are more related to storing data that doesn’t fit 
neatly into a relational model. That means that it probably doesn’t work that 
well for table storage either. In that case, I might well now use a dedicated 
JSON store like DocumentDB instead.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ian Thomas
Sent: Sunday, 6 March 2016 6:45 PM
To: 'ozDotNet' 
Subject: RE: Azure Table query "not null"


I wondered about Azure SQL vs Azure Table Storage pros and cons, and to lessen 
my ignorance looked at a few Q at Stackoverflow.

This part of a response (5 years to 2 years old, so the balance may have 
changed considerably) is one person’s opinion, but I’d be interested in Greg 
Low’s comments on it:



When should I use SQL Azure and when should I use Table Storage?

This is an excellent question, and one of the tougher and harder-to-reverse 
decisions that solution architects have to make when designing for Azure.

There are multiple dimensions to consider. On the negative side, SQL Azure is 
relatively expensive per gigabyte of storage, does not scale super well and is 
limited to 150 GB/database; however, and this is very important, there are no 
transaction fees against SQL Azure and your developers already know how to code 
against it.

ATS is a different animal altogether. Capable of mega-scalability, it is dirt 
cheap to store in, but gets expensive to access frequently. It also requires a 
significant amount of CPU power from your nodes to manipulate. It basically 
forces your compute nodes to become mini DB servers, as all relational 
activity is delegated to them.

So, in my opinion, frequently accessed data that does not need huge scalability 
and is not super large in size should be destined for SQL Azure, otherwise 
Azure Table Services.

Your specific example, transactional data from 

RE: Azure Table query "not null"

2016-03-06 Thread
If it's NULL, why would you store it at all? Surely the lack of a value is NULL.

In SQL Server when they issue XML, they have an option for a way to represent 
NULL. Normally they just omit the attribute. They normally only do that in case 
someone is trying to derive the schema from the data. In that case, you want 
something to let them know there is a column but it currently has no value.

For a table storage system (like an EAV system), why store anything? (Unless 
again, you're somehow deriving the schema from the data that's present).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax 
SQL Down Under | Web: www.sqldownunder.com

-Original Message-
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Thomas Koster
Sent: Sunday, 6 March 2016 5:27 PM
To: ozDotNet 
Subject: Re: Azure Table query "not null"

On 4 March 2016 at 18:03, Greg Keogh  wrote:
> Folks, anyone using Azure Tables Storage in anger? I really like it, 
> simple and effective.
>
> What is the query syntax equivalent of SQL "not null", that is, a row 
> has a named property? I have a table with tens of thousands of rows, 
> but only a small percentage contains a property value named 
> ErrorMessage, and I want to select them only. Going ErrorMessage neq 
> "" works but it's too ugly to believe there isn't a better way.

OData has a "null" literal, but I don't know if they have it in Azure Tables (I 
have not used it "in anger").

Have you considered including something in the RowKey so that you can 
distinguish these rows from the rest with a range query instead?

--
Thomas Koster


Re: SQL Server 2014 Books Online - bits missing?

2016-03-04 Thread
Yes they know that's an issue.

Regards

Greg

Dr Greg Low
SQL Down Under
+61 419201410
1300SQLSQL (1300775775)

On 4 Mar 2016, at 9:38 PM, noonie 
<neale.n...@gmail.com<mailto:neale.n...@gmail.com>> wrote:

Yeah. If your workplace lets you go online from the boxes you need to read 
BOL from ;-)

--
noonie


On 4 March 2016 at 15:10, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
Yep, they had an issue.

Worth noting, though, that there’s a lot of discussion about the utility of 
continuing to produce offline documentation. And the online versions are now far 
superior.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Ken Schaefer
Sent: Friday, 4 March 2016 2:47 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: RE: SQL Server 2014 Books Online - bits missing?

I think I’ve got it fixed - the missing content is showing up after installing 
SP1 + CU5, and re-downloading all the help files.

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Ken Schaefer
Sent: Friday, 4 March 2016 12:40 PM
To: ozDotNet (ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>) 
<ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: SQL Server 2014 Books Online - bits missing?

Hi,

Anyone encountered this before? I’ve just tried to install SQL Server 2014 
books online locally. I selected the 4 available books in Help Viewer, and it 
downloaded around 60MB of stuff. But there are vast swathes of stuff missing 
(e.g. isn’t there supposed to be a section on Data Manipulation Language in the 
language reference?). There’s nothing on SELECT, INSERT etc.

If someone’s run into this before, how did you get the missing content 
installed? Thanks in advance





RE: SQL Server 2014 Books Online - bits missing?

2016-03-03 Thread
Yep, they had an issue.

Worth noting, though, that there’s a lot of discussion about the utility of 
continuing to produce offline documentation. And the online versions are now far 
superior.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Friday, 4 March 2016 2:47 PM
To: ozDotNet 
Subject: RE: SQL Server 2014 Books Online - bits missing?

I think I’ve got it fixed - the missing content is showing up after installing 
SP1 + CU5, and re-downloading all the help files.

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Ken Schaefer
Sent: Friday, 4 March 2016 12:40 PM
To: ozDotNet (ozdotnet@ozdotnet.com) 
>
Subject: SQL Server 2014 Books Online - bits missing?

Hi,

Anyone encountered this before? I’ve just tried to install SQL Server 2014 
books online locally. I selected the 4 available books in Help Viewer, and it 
downloaded around 60MB of stuff. But there are vast swathes of stuff missing 
(e.g. isn’t there supposed to be a section on Data Manipulation Language in the 
language reference?). There’s nothing on SELECT, INSERT etc.

If someone’s run into this before, how did you get the missing content 
installed? Thanks in advance



RE: [OT] ACS - relevant?

2016-02-29 Thread
But that’s my point. Agreed, it’s not necessarily anything to do with whether 
the project fails. We know that.

It’s the backside protection that is improved by the external certification, 
not necessarily the project outcome.

That said, I do see a large number of projects that have in fact failed (or are 
perilously close to failing) through basic incompetence.

It is a problem in our industry whether we want to face it or not. It’s quite 
tiring to endlessly try to rectify the same sorts of basic problems.

I really love work where it’s “how should we tackle this development?” rather 
than “OMG, we’re in such a mess. What do we do next?”, when the panic sets in. 
The more this happens, the more likely that some form of regulation might 
occur, at least for sections of the industry.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Ken Schaefer
Sent: Tuesday, 1 March 2016 12:52 PM
To: ozDotNet 
Subject: RE: [OT] ACS - relevant?

Do many IT projects fail because of the lack of externally certified 
competency? I’m not sure they do.

I’ve seen projects fail because requirements were uncertain (or changed), or 
scope changed, or complexity was underestimated, or best effort “guesses” based 
on incomplete information at the time ended up being the wrong punt.

Very few of these are “IT” problems – they are problems that come from the 
business, or in governance, and some are just plain bad luck.

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 1 March 2016 11:02 AM
To: ozDotNet >
Subject: RE: [OT] ACS - relevant?

Almost agree Ken. I don’t see having “professional” attributes as being related 
to whether or not IT projects fail. What I do see is a difference in the finger 
pointing when they fail.

If I was the CEO responsible when an issue occurred, I’d feel more comfortable 
having used staff that an external body says are professional, rather than ones 
I assessed myself to be great at what they do. It avoids me being stuck with 
having to try to argue basic competence.

And yes, point taken about common parlance. I have a friend who is a 
wheelbarrow mechanic and many who are sales engineers.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Ken Schaefer
Sent: Tuesday, 1 March 2016 10:46 AM
To: ozDotNet >
Subject: RE: [OT] ACS - relevant?

Whilst you are right that Tony is conflating professionalism with desirable 
employee attributes, I think you’re also conflating professionalism with 
“avoidance of high failure rates in IT projects” – there are many 
“professional” endeavours that have failures (whether it be accounting issues 
through to scientific experiments) which having a profession wouldn’t suddenly 
mitigate: a lot of IT works a commercial sphere where “good enough” is the 
goal. There’s plenty of other IT (utilities, aerospace, defence) where BAU 
failure is not tolerated. Certainly projects may go “over budget”, but that 
happens in civil engineering, legal disputes and many other “professional” 
activities as well.

And lastly, I think, in common parlance, “professional” and “white collar” have 
become conflated. Most people in the community would call marketing/advertising 
people, or human resources people, or vendor/contract management people, or 
people who work in finance to be “professionals”, whereas by the formal 
definition, they’re not.

Cheers
Ken

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Monday, 29 February 2016 10:05 PM
To: ozDotNet >
Subject: RE: [OT] ACS - relevant?

I follow what you’re saying Tony but the two concepts are separate.

You are describing what you are looking for in an employee. You might consider 
that “professionalism” but you are not actually describing what most other 
industries would describe as professionalism. In most industries, 
professionalism is about a formal agreement to adhere to a code of ethics, 
being qualified in the first place, maintaining appropriate certifications, 
carrying out ongoing learning, etc. And, more importantly, ejection from the 
profession if you don’t do what’s required.

It’s just that the IT industry places more value on a perceived ability to get 
something 

RE: Azure File Storage

2016-02-28 Thread
Yep, can’t say I like the new Calculator. It does have options now for 
converting various types of units though.

I saw a menu item that said “Settings” and I wondered what was in there. It 
didn’t have any settings. It was an About box.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 29 February 2016 12:13 PM
To: ozDotNet 
Subject: Re: Azure File Storage

Did you try the “Connect to Network Drive” option in Windows Explorer in 
Windows 10 rather than just NET USE ?

Sadly, the dialog says "Error code: 0x80070036 The network path was not found".

GK

P.S. I just ran calc.exe on Windows 10 for the first time ever, and the new 
look is abominable. It seems to do more, and it's resizable, but it doesn't 
look like a calculator anymore and has gone all flat and bland like a modern 
app. Yeech!


RE: Azure File Storage

2016-02-28 Thread
Hey Greg,

Did you try the “Connect to Network Drive” option in Windows Explorer in 
Windows 10 rather than just NET USE ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 29 February 2016 11:13 AM
To: ozDotNet 
Subject: Re: Azure File Storage

I'm googletarded, but I did find this way down a page:

Are Azure File shares visible publicly over the Internet, or are they only 
reachable from Azure?

As long as port 445 (TCP Outbound) is open and your client supports the SMB 3.0 
protocol (e.g., Windows 8 or Windows Server 2012), your file share is available 
via the Internet.



Sounds great! However, my net use command gives error 53 from my home Win10 
machine with no firewall active, and I get error 5 (access denied) from a 
Win2008R2 Azure VM that I just created. Snookered from both directions! The 
CloudBerry product is interesting, but I want to avoid foreign software 
whenever possible.



GK
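Before digging further into error 53/5, it can be worth confirming that 
outbound TCP 445 isn't being blocked by the ISP or a firewall, since SMB is 
one of the most commonly filtered ports. A small stdlib sketch (the function 
name is illustrative):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.
    A False result for port 445 means SMB traffic can't get out, so any
    'net use' against an Azure File share is doomed before it starts."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would be something like `port_reachable("myaccount.file.core.windows.net", 445)` 
(the account name is illustrative) before spending time on credentials.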

On 29 February 2016 at 11:05, David Burstin 
> wrote:
Hi Greg,

A quick search seems to indicate that 
http://www.cloudberrylab.com/windows-azure-cloud-drive-desktop-software.aspx 
can deal with this, for $30 per machine. I haven't found anything else that 
does, but your Google-fu may be better than mine.

Cheers
Dave

On 29 February 2016 at 10:53, Greg Keogh 
> wrote:
Folks, I just read about Azure File Storage and I got all excited at the 
possibility of creating a familiar mapped drive letter to the cloud. In the 
portal I created the File share, uploaded some files and issued a net use 
command which failed with error 53. It turns out I didn't read the fine print 
in my excitement, which says:

To connect to this file share, run this command from any Windows virtual 
machine on the same subscription and location

So you can't just mount it from your home PC, only from a suitable VM. I 
suppose there are security and performance issues, but it's a damn shame as we 
could really have put this to good use as a kind of "giant file share" for apps 
written in C++ with no code changes.

I just want to be sure that this is all true and there's not some less 
restrictive way of using Azure File Storage. Is there some other magic way of 
having a file share in the cloud from anywhere?

Greg K




Re: [OT] Shrinking LDF file

2016-02-09 Thread
To accommodate a transaction of a certain size, it had to grow. That's normal. 
It doesn't autoshrink back down as doing so is generally a really bad idea. 
There is an autoshrink option but I usually joke that it should be renamed 
"auto fragment my filesystem". You don't want files constantly growing and 
shrinking. Empty space in the DB is your friend.

Regards

Greg

Dr Greg Low
SQL Down Under
+61 419201410
1300SQLSQL (1300775775)

On 9 Feb 2016, at 6:53 PM, Greg Keogh 
> wrote:

I found it! ...

DBCC SHRINKFILE(mydatabase_log, 1)

This reduced the 140MB to 3MB, not quite as good as I hoped, but much better. I 
don't yet know what this command actually does. The advice in dozens of top 
search web articles is really verbose and misleading.

Why on earth is the LDF so big anyway in simple mode?! Once a transaction is 
complete, isn't that it and it's forgotten? I know log/journal files are 
required for historical recovery, but how can that much space be used for 
anything useful?

 -- GK

On 9 February 2016 at 18:43, Greg Keogh 
> wrote:
Folks, I have been trying for 40 minutes to shrink the 140MB LDF file paired 
with its 15MB MDF file. I have been searching and searching and reading, but 
all the commands and suggestions I've found DON'T WORK. This is a test database 
and there is nothing in the LDF of interest to me. I want to deploy it to Azure 
for more testing, but I want to remove the 140MB of log garbage first.

The DB has recovery mode set to simple like everyone suggests. Does anyone 
actually, really know how to shrink the log back to as small as possible? Is it 
even possible? Do I misunderstand how this works?!

Greg K



RE: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread
We have some E7440’s that have been excellent but are now starting to have some 
screen separation near the hinges. (Bit surprising really). Otherwise, love 
them. So, was thinking about the E7470’s but now thinking XPS 15’s. My eyesight 
would appreciate the 15 inch screen, and the narrow bezel makes it not much 
larger than the 14 inch units.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 8:06 AM
To: 'ozDotNet' 
Subject: RE: Any opinions on the Dell XPS 15 laptops?

I was actually looking at picking one up. I really want a Surface Pro 4 or 
Surface Book but the firmware problems, and mostly the exorbitant price, turn 
me away. In addition, the speed at which older models of Surface (namely 2 and 
3) are simply ditched and no longer made (i.e. peripherals/replacements soon dry 
up) as soon as new models arrive means the life of these units is pretty small.

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be grabbed with 
16GB of RAM, great screen, touch, and good proc. Haven’t played with one 
personally though.


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet >
Subject: Any opinions on the Dell XPS 15 laptops?

? As per subject ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com



RE: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread
Nah, sadly the 8GB RAM is a showstopper. With the Precision, I’m tossing up 
between 32GB and 64GB. I get by at present with 16GB but wish there was more.

Are the drives upgradable on those? One of the things I love about my current 
E7440 is that drives, memory, etc. are all upgradable. So many laptops now have 
fixed options.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Connors
Sent: Wednesday, 10 February 2016 3:54 PM
To: ozDotNet 
Subject: Re: Any opinions on the Dell XPS 15 laptops?

Howdy,

If you can tolerate the 8GB of RAM limitation, I love my Lenovo X1 Carbon. 
Battery life, size, etc are all superb.

David.


On Wed, 10 Feb 2016 at 14:40 Paul Glavich 
> wrote:
>> starting to have some screen separation near the hinges

My current laptop has huge separation of the plastic that connects to the 
hinges and screen. So much so I can pretty much poke a finger into it when 
opening the lid and seeing visible electronic componentry shift around. It is 
currently bound together with masking tape which helps, hence my need for a new 
lappy ☺


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 10 February 2016 8:12 AM

To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

We have some E7440’s that have been excellent but are now starting to have some 
screen separation near the hinges. (Bit surprising really). Otherwise, love 
them. So, was thinking about the E7470’s but now thinking XPS 15’s. My eyesight 
would appreciate the 15 inch screen, and the narrow bezel makes it not much 
larger than the 14 inch units.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 8:06 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

I was actually looking at picking one up. I really want a Surface Pro 4 or 
Surface Book, but the firmware problems, and mostly the exorbitant price, turn 
me away. In addition, the speed at which older Surface models (namely the 2 and 
3) are simply ditched and no longer made (i.e. peripherals/replacements soon dry 
up) as soon as new models arrive means the life of these units is pretty short.

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be had with 
16GB of RAM, a great screen, touch, and a good processor. Haven’t played with one 
personally though.


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Any opinions on the Dell XPS 15 laptops?

? As per subject ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 
417 189 363


RE: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread
Yep, not sure I want a keyboard like the XPS 15 one. It’s a bunch of flat-top 
keys with little travel.

I’m now wondering about the new Precision 15 7000. The keyboard looks normal, but there’s no 
pricing on the site.

I still also might try to get away with an E7470.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 3:40 PM
To: 'ozDotNet' 
Subject: RE: Any opinions on the Dell XPS 15 laptops?

>> starting to have some screen separation near the hinges

My current laptop has huge separation of the plastic that connects the 
hinges and screen. So much so that I can pretty much poke a finger into it when 
opening the lid and see visible electronic componentry shift around. It is 
currently bound together with masking tape, which helps, hence my need for a new 
lappy :)


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 10 February 2016 8:12 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

We have some E7440’s that have been excellent but are now starting to have some 
screen separation near the hinges. (Bit surprising really). Otherwise, love 
them. So, was thinking about the E7470’s but now thinking XPS 15’s. My eyesight 
would appreciate the 15 inch screen, and the narrow bezel makes it not much 
larger than the 14 inch units.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 8:06 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

I was actually looking at picking one up. I really want a Surface Pro 4 or 
Surface Book, but the firmware problems, and mostly the exorbitant price, turn 
me away. In addition, the speed at which older Surface models (namely the 2 and 
3) are simply ditched and no longer made (i.e. peripherals/replacements soon dry 
up) as soon as new models arrive means the life of these units is pretty short.

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be had with 
16GB of RAM, a great screen, touch, and a good processor. Haven’t played with one 
personally though.


-  Glav

From: ozdotnet-boun...@ozdotnet.com 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Any opinions on the Dell XPS 15 laptops?

? As per subject ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com



Re: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread
VMs, tabular data models, etc.

For example, I often run a cluster with a DC and 3 SQL nodes. I wish I could always do 
that in the cloud, but connectivity often sucks in Oz...

Regards

Greg

Dr Greg Low
SQL Down Under
+61 419201410
1300SQLSQL (1300775775)

On 10 Feb 2016, at 5:55 PM, Tom Rutter 
<therut...@gmail.com<mailto:therut...@gmail.com>> wrote:

Only "get by" with 16GB? What do you do with these machines lol

On Wednesday, 10 February 2016, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
Nah, sadly the 8GB RAM is a showstopper. With the Precision, I’m tossing up 
between 32GB and 64GB. I get by at present with 16GB but wish there was more.

Are the drives upgradable on those? One of the things I love about my current 
E7440 is that drives, memory, etc. are all upgradable. So many laptops now have 
fixed options.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Wednesday, 10 February 2016 3:54 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Any opinions on the Dell XPS 15 laptops?

Howdy,

If you can tolerate the 8GB of RAM limitation, I love my Lenovo X1 Carbon. 
Battery life, size, etc are all superb.

David.


On Wed, 10 Feb 2016 at 14:40 Paul Glavich <subscripti...@theglavs.com> wrote:
>> starting to have some screen separation near the hinges

My current laptop has huge separation of the plastic that connects the 
hinges and screen. So much so that I can pretty much poke a finger into it when 
opening the lid and see visible electronic componentry shift around. It is 
currently bound together with masking tape, which helps, hence my need for a new 
lappy :)


-  Glav

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 10 February 2016 8:12 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

We have some E7440’s that have been excellent but are now starting to have some 
screen separation near the hinges. (Bit surprising really). Otherwise, love 
them. So, was thinking about the E7470’s but now thinking XPS 15’s. My eyesight 
would appreciate the 15 inch screen, and the narrow bezel makes it not much 
larger than the 14 inch units.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 8:06 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

I was actually looking at picking one up. I really want a surface pro 4 or 
surface book but the firmware problems, and mostly the exhorbitant price, turn 
me away. In addition, the speed at which older models of surface (namely 2 and 
3) are simply ditched and no longer made (ie. peripherals/replacements soon dry 
up) as soon as new models arrive means the life of these units is pretty small.

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be grabbed with 
16Gb of mem, great screen, touch, and good proc. Haven’t played with one 
personally though.


-  Glav

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Any opinions on the Dell XPS 15 laptops?

? As per subject ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363


RE: SQL foreign key question

2016-02-08 Thread
You often see tables referring to themselves (due to some sort of hierarchical 
data) but having a column refer to itself seems at best pointless, at worst 
silly.

And reason #3723923 why I won’t ever use such a visual table designer.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony McGee
Sent: Tuesday, 9 February 2016 3:59 PM
To: ozDotNet 
Subject: Re: SQL foreign key question


This can sometimes happen to a primary key if you use the visual query designer 
to create a FK relationship and don't change the defaults before clicking OK.

It looks like it's ignored in the insert query execution plan.
On 9 Feb 2016 2:32 pm, "David Burstin" 
> wrote:
I came across this (snipped to protect the innocent):

CREATE TABLE [dbo].[V2_BREC_NMIStatusHistory] (
    [NMIStatusHistoryId] INT IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
    CONSTRAINT [PK_V2_BREC_NMIStatusHistory]
        PRIMARY KEY CLUSTERED ([NMIStatusHistoryId] ASC),
    CONSTRAINT [FK_V2_BREC_NMIStatusHistory_V2_BREC_NMIStatusHistory]
        FOREIGN KEY ([NMIStatusHistoryId])
        REFERENCES [dbo].[V2_BREC_NMIStatusHistory] ([NMIStatusHistoryId])
);


Notice that the primary key identity field has a foreign key constraint on 
itself. How does this work?

I would have thought that any attempt to add a record would check the table for 
the existence of the new key, and as it obviously wouldn’t exist yet, that 
would break the foreign key constraint resulting in the record not being 
written. But, the table has plenty of data.

Anyone have any ideas how this actually works, or does it just do nothing?


RE: SQL foreign key question

2016-02-08 Thread
Yes, if it just points to itself, it already knows that it satisfies it. It is 
respecting the constraint. There is nothing to check.
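
Greg's point can be demonstrated outside SQL Server as well. Below is a minimal sketch using Python's sqlite3 module — an assumption of this example is that SQLite (not SQL Server) is used, with a simplified hypothetical table name — showing that a column with a foreign key pointing at itself is trivially satisfied by every row, so inserts succeed:

```python
import sqlite3

# SQLite example (not SQL Server): a primary key column whose foreign key
# references itself. Each inserted row is its own parent, so the constraint
# is satisfied by construction and never rejects an insert.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite
conn.execute("""
    CREATE TABLE StatusHistory (
        StatusHistoryId INTEGER PRIMARY KEY
            REFERENCES StatusHistory (StatusHistoryId)
    )
""")
conn.execute("INSERT INTO StatusHistory (StatusHistoryId) VALUES (1)")
conn.execute("INSERT INTO StatusHistory (StatusHistoryId) VALUES (2)")
count = conn.execute("SELECT COUNT(*) FROM StatusHistory").fetchone()[0]
print(count)  # 2 -- both inserts succeeded despite the self-referencing FK
```

Which matches the observation in the thread: the table fills with data because there is nothing for the engine to check.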

I see dumb stuff like this all the time.

Was at a site recently where they had triple-created all the FK constraints. No 
big surprise that it doesn’t actually check the value 3 times (to be sure, to 
be sure, to be sure), regardless of what it shows in the plan.

It’s also one of the reasons why I always name constraints. Then you can’t 
accidentally multiple-create them. (Reason #3723924 for not using visual table 
designers).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of David Burstin
Sent: Tuesday, 9 February 2016 4:06 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: SQL foreign key question

On 9 February 2016 at 16:02, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
You often see tables referring to themselves (due to some sort of hierarchical 
data) but having a column refer to itself seems at best pointless, at worst 
silly.

For me the bigger question is how this actually works. Does the SQL engine just 
decide to ignore the constraint because it is so obviously dumb? If so, am I 
the only one bothered by that? I would actually have preferred that no records 
could be added to the table, because at least then I'd know that the engine is 
respecting my constraints.










And reason #3723923 why I won’t ever use such a visual table designer.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:%2B61%20419201410> mobile│ 
+61 3 8676 4913<tel:%2B61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Tony McGee
Sent: Tuesday, 9 February 2016 3:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: SQL foreign key question


This can sometimes happen to a primary key if you use the visual query designer 
to create a FK relationship and don't change the defaults before clicking OK.

It looks like it's ignored in the insert query execution plan.
On 9 Feb 2016 2:32 pm, "David Burstin" 
<david.burs...@gmail.com<mailto:david.burs...@gmail.com>> wrote:
I came across this (snipped to protect the innocent):

CREATE TABLE [dbo].[V2_BREC_NMIStatusHistory] (
    [NMIStatusHistoryId] INT IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
    CONSTRAINT [PK_V2_BREC_NMIStatusHistory]
        PRIMARY KEY CLUSTERED ([NMIStatusHistoryId] ASC),
    CONSTRAINT [FK_V2_BREC_NMIStatusHistory_V2_BREC_NMIStatusHistory]
        FOREIGN KEY ([NMIStatusHistoryId])
        REFERENCES [dbo].[V2_BREC_NMIStatusHistory] ([NMIStatusHistoryId])
);


Notice that the primary key identity field has a foreign key constraint on 
itself. How does this work?

I would have thought that any attempt to add a record would check the table for 
the existence of the new key, and as it obviously wouldn’t exist yet, that 
would break the foreign key constraint resulting in the record not being 
written. But, the table has plenty of data.

Anyone have any ideas how this actually works, or does it just do nothing?



RE: SQL foreign key question

2016-02-08 Thread
Ahh. Should have been:

WHERE A <= 10
AND A >= 10



Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 4:16 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: SQL foreign key question

SQL Server also pre-processes all sorts of things.

For example:

WHERE A <= 10
AND >= 10

That can safely be replaced by:

WHERE A = 10

But they are very cautious about rewriting your code. Very easy to break it. 
But they really have to. While humans can write lousy code, code generators are 
spectacular at doing so. Here’s an example:


WHERE b.entry >= ISNULL(NULL, '1/1/1900')
AND b.entry < DATEADD(DAY, 1, ISNULL(NULL, '1/1/3000'))
AND --Filter on Facility
    ((NULL IS NOT NULL AND pv.FacilityID IN (NULL)) OR (NULL IS NULL))
AND --Filter on Company
    ((NULL IS NOT NULL AND pv.CompanyID IN (NULL)) OR (NULL IS NULL))
AND --Filter on Financial Class
    ((NULL IS NOT NULL AND pv.FinancialClassMID IN (NULL)) OR (NULL IS NULL))

Etc. etc. etc.  (Not to mention their Y3K problem)

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 4:12 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: RE: SQL foreign key question

Yes, if it just points to itself, it already knows that it satisfies it. It is 
respecting the constraint. There is nothing to check.

I see dumb stuff like this all the time.

Was at a site recently where they had triple-created all the FK constraints. No 
big surprise that it doesn’t actually check the value 3 times (to be sure, to 
be sure, to be sure), regardless of what it shows in the plan.

It’s also one of the reasons why I always name constraints. Then you can’t 
accidentally multiple-create them. (Reason #3723924 for not using visual table 
designers).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Burstin
Sent: Tuesday, 9 February 2016 4:06 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: SQL foreign key question

On 9 February 2016 at 16:02, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
You often see tables referring to themselves (due to some sort of hierarchical 
data) but having a column refer to itself seems at best pointless, at worst 
silly.

For me the bigger question is how this actually works. Does the SQL engine just 
decide to ignore the constraint because it is so obviously dumb? If so, am I 
the only one bothered by that? I would actually have preferred that no records 
could be added to the table, because at least then I'd know that the engine is 
respecting my constraints.










And reason #3723923 why I won’t ever use such a visual table designer.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:%2B61%20419201410> mobile│ 
+61 3 8676 4913<tel:%2B61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Tony McGee
Sent: Tuesday, 9 February 2016 3:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: SQL foreign key question


This can sometimes happen to a primary key if you use the visual query designer 
to create a FK relationship and don't change the defaults before clicking OK.

It looks like it's ignored in the insert query execution plan.
On 9 Feb 2016 2:32 pm, "David Burstin" 
<david.burs...@gmail.com<mailto:david.burs...@gmail.com>> wrote:
I came across this (snipped to protect the innocent):

CREATE TABLE [dbo].[V2_BREC_NMIStatusHistory] (
    [NMIStatusHistoryId] INT IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
    CONSTRAINT [PK_V2_BREC_NMIStatusHistory]
        PRIMARY KEY CLUSTERED ([NMIStatusHistoryId] ASC),
    CONSTRAINT [FK_V2_BREC_NMIStatusHistory_V2_BREC_NMIStatusHistory]
        FOREIGN KEY ([NMIStatusHistoryId])
        REFERENCES [dbo].[V2_BREC_NMIStatusHistory] ([NMIStatusHistoryId])
);

RE: SQL foreign key question

2016-02-08 Thread
SQL Server also pre-processes all sorts of things.

For example:

WHERE A <= 10
AND >= 10

That can safely be replaced by:

WHERE A = 10

But they are very cautious about rewriting your code. Very easy to break it. 
But they really have to. While humans can write lousy code, code generators are 
spectacular at doing so. Here’s an example:


WHERE b.entry >= ISNULL(NULL, '1/1/1900')
AND b.entry < DATEADD(DAY, 1, ISNULL(NULL, '1/1/3000'))
AND --Filter on Facility
    ((NULL IS NOT NULL AND pv.FacilityID IN (NULL)) OR (NULL IS NULL))
AND --Filter on Company
    ((NULL IS NOT NULL AND pv.CompanyID IN (NULL)) OR (NULL IS NULL))
AND --Filter on Financial Class
    ((NULL IS NOT NULL AND pv.FinancialClassMID IN (NULL)) OR (NULL IS NULL))

Etc. etc. etc.  (Not to mention their Y3K problem)
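
Each of those generated "filters" is built entirely from constants: (NULL IS NOT NULL AND ...) can never be true, and (NULL IS NULL) always is, so every block folds to TRUE before any row is examined. A minimal sketch using Python's sqlite3 module — an assumption of this example is that SQLite (rather than SQL Server) is used, with a made-up one-column table — shows that the pattern filters out nothing:

```python
import sqlite3

# SQLite example (not SQL Server): the code-generator style predicate
# (NULL IS NOT NULL AND col IN (NULL)) OR (NULL IS NULL) is a constant TRUE,
# so it eliminates no rows at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pv (FacilityID INTEGER)")
conn.executemany("INSERT INTO pv VALUES (?)", [(1,), (2,), (3,)])

rows = conn.execute("""
    SELECT COUNT(*) FROM pv
    WHERE (NULL IS NOT NULL AND pv.FacilityID IN (NULL))
       OR (NULL IS NULL)
""").fetchone()[0]
print(rows)  # 3 -- the "filter" eliminated nothing
```

This is exactly why an optimizer that pre-processes such predicates matters: it can discard the whole block at plan time instead of evaluating it per row.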

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 4:12 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: SQL foreign key question

Yes, if it just points to itself, it already knows that it satisfies it. It is 
respecting the constraint. There is nothing to check.

I see dumb stuff like this all the time.

Was at a site recently where they had triple-created all the FK constraints. No 
big surprise that it doesn’t actually check the value 3 times (to be sure, to 
be sure, to be sure), regardless of what it shows in the plan.

It’s also one of the reasons why I always name constraints. Then you can’t 
accidentally multiple-create them. (Reason #3723924 for not using visual table 
designers).

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Burstin
Sent: Tuesday, 9 February 2016 4:06 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: SQL foreign key question

On 9 February 2016 at 16:02, Greg Low (罗格雷格博士) 
<g...@greglow.com<mailto:g...@greglow.com>> wrote:
You often see tables referring to themselves (due to some sort of hierarchical 
data) but having a column refer to itself seems at best pointless, at worst 
silly.

For me the bigger question is how this actually works. Does the SQL engine just 
decide to ignore the constraint because it is so obviously dumb? If so, am I 
the only one bothered by that? I would actually have preferred that no records 
could be added to the table, because at least then I'd know that the engine is 
respecting my constraints.










And reason #3723923 why I won’t ever use such a visual table designer.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410<tel:%2B61%20419201410> mobile│ 
+61 3 8676 4913<tel:%2B61%203%208676%204913> fax
SQL Down Under | Web: www.sqldownunder.com<http://www.sqldownunder.com/>

From: ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com> 
[mailto:ozdotnet-boun...@ozdotnet.com<mailto:ozdotnet-boun...@ozdotnet.com>] On 
Behalf Of Tony McGee
Sent: Tuesday, 9 February 2016 3:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com<mailto:ozdotnet@ozdotnet.com>>
Subject: Re: SQL foreign key question


This can sometimes happen to a primary key if you use the visual query designer 
to create a FK relationship and don't change the defaults before clicking OK.

It looks like it's ignored in the insert query execution plan.
On 9 Feb 2016 2:32 pm, "David Burstin" 
<david.burs...@gmail.com<mailto:david.burs...@gmail.com>> wrote:
I came across this (snipped to protect the innocent):

CREATE TABLE [dbo].[V2_BREC_NMIStatusHistory] (
    [NMIStatusHistoryId] INT IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL,
    CONSTRAINT [PK_V2_BREC_NMIStatusHistory]
        PRIMARY KEY CLUSTERED ([NMIStatusHistoryId] ASC),
    CONSTRAINT [FK_V2_BREC_NMIStatusHistory_V2_BREC_NMIStatusHistory]
        FOREIGN KEY ([NMIStatusHistoryId])
        REFERENCES [dbo].[V2_BREC_NMIStatusHistory] ([NMIStatusHistoryId])
);


Notice that the primary key identity field has a foreign key constraint on 
itself. How does this work?

I would have thought that any attempt to add a record would check the table for 
the existence of the new key, and as it obviously wouldn’t exist yet, that 
would break the foreign key constraint resulting in the record not being 
written. But, the table has plenty of data.

Anyone have any ideas how this actually works, or does it just do nothing?



Any opinions on the Dell XPS 15 laptops?

2016-02-08 Thread
? As per subject ?

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com



Re: Azure static web sites

2016-01-28 Thread
Saved me a job, thanks Andrew!

Regards

Greg

Dr Greg Low
SQL Down Under
+61 419201410
1300SQLSQL (1300775775)

On 28 Jan 2016, at 6:22 PM, Greg Keogh wrote:

https://blogs.msdn.microsoft.com/acoat/2016/01/28/publish-a-static-web-site-using-azure-web-apps/

Oooh! I'll run through that later tonight. I think some of my screens were 
different when I tried similar initial steps, but I'll find out ... Ta, Greg

