Hi Steve,

> however that is a rather major issue.

All you have to do is use the Legacy Network Layer, and the problem is entirely averted.

> What kind of response are you getting with regard to this case?

I have a bug report open, and 4D Engineering recently thought they might have fixed it, but when I re-tested I could not find any improvement. One caveat on this: I deploy many instances of 4D Server on a shared machine, and I don't know if that is part of the issue. I re-tested with one 4D Server on the new network layer and all the rest on that machine on the legacy layer. All the legacy 4D Servers were zippy; only the one on the new layer behaved badly. I switched that one back to legacy and restarted it (I had to kick out 27 users), and then it was fast again.

Tony

On 9/22/17, 4:03 PM, "Stephen J. Orth" <[email protected]> wrote:

    Tony,

    Thanks for sharing this information; however, that is a rather major issue. Every one of our customers is well above 20 users, so this does not bode well.

    What kind of response are you getting with regard to this case?

    Steve

-----Original Message-----
From: 4D_Tech [mailto:[email protected]] On Behalf Of Tony Ringsmuth via 4D_Tech
Sent: Friday, September 22, 2017 3:19 PM
To: 4D Nug <[email protected]>
Cc: Tony Ringsmuth <[email protected]>
Subject: Re: 4D v16 issues

Drew,

I'm happy to report that my users are on 16.2, and things are going quite well. I have users to the tune of about 500,000 man-hours/month using instances of the database that I work on.

The only thing I would shy away from is the new network layer: I'm dealing with a case right now where, with larger numbers of users (20+), things get slow.

Tony

**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:     http://lists.4d.com/faqnug.html
Archive: http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:   mailto:[email protected]
**********************************************************************

