On Wed, Apr 26, 2017 at 10:37 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Gotcha. I've got my main code base in V13 still and like it fine.
>
> I still feel behind on this thread...what turned out to be the source of
> the slowdown? Packing? Unpacking? Transmission? Some
> The app is in V13 now and will be moving to V15 over the Summer so
there's no 4D Object available yet.
On Wed, Apr 26, 2017 at 3:49 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:
I just went back to the top of this thread and scanned down...and I think
that I'm not understanding a key detail. Douglas, you're saying that the
packed records have 'meta-data', but it sounds like that data is a map to
the packing. So, packed data types and offsets, something of that sort.
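(For illustration only, a rough Python sketch of that idea — metadata as a small header of field types and offsets that maps the packed payload. The 4D code isn't shown here, so the type codes, big-endian byte order, and length-prefixed text are my assumptions, not the original format.)

```python
import struct

FIELD_LONG = 1   # 4-byte signed integer (hypothetical type code)
FIELD_TEXT = 2   # UTF-8 text, length-prefixed (hypothetical type code)

def pack_record(fields):
    """fields: list of (type_code, value). Returns header + payload."""
    payload = b""
    meta = []                                  # (type_code, offset) per field
    for type_code, value in fields:
        meta.append((type_code, len(payload)))
        if type_code == FIELD_LONG:
            payload += struct.pack(">i", value)
        else:
            data = value.encode("utf-8")
            payload += struct.pack(">i", len(data)) + data
    header = struct.pack(">i", len(meta))      # field count first
    for type_code, offset in meta:
        header += struct.pack(">ii", type_code, offset)
    return header + payload

def unpack_record(blob):
    """Read the metadata map, then slice each field out of the payload."""
    (count,) = struct.unpack_from(">i", blob, 0)
    meta = [struct.unpack_from(">ii", blob, 4 + 8 * i) for i in range(count)]
    base = 4 + 8 * count                       # payload starts after the map
    values = []
    for type_code, offset in meta:
        if type_code == FIELD_LONG:
            values.append(struct.unpack_from(">i", blob, base + offset)[0])
        else:
            (n,) = struct.unpack_from(">i", blob, base + offset)
            start = base + offset + 4
            values.append(blob[start:start + n].decode("utf-8"))
    return values
```

The point of the map is that the unpacker never has to guess: every field's type and position is declared up front.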
On Tue, Apr 25, 2017 at 10:12 AM, Tim Nevels via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Here’s an idea. I’m assuming all the record processing is done in a single
> process. How much work would it be to modify the code so that it spawns
> multiple processes that can run at the same time? I
Jim:
SSD - I'm a big believer in SSD's, no question of that. I'm using a MacBook
Pro with a 500 GB SSD. It's a "late 2013" model so it's not as fast as the
newer ones (450 MB/s vs > 1000). The server machine is using SSD's running
Win Server 2008 with a single i7-4770 CPU running at 3.4 GHz. RAM
On Apr 26, 2017, at 5:12 PM, Douglas von Roeder via 4D_Tech
<4d_tech@lists.4d.com> wrote:
> There are many, repetitive method calls. For example, each time the code
> converts a byte range to a longint, it calls a function that returns the
> byte order. As much as I never met a subroutine I
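(A quick Python illustration of that overhead pattern — not the actual 4D code: calling a byte-order helper on every 4-byte-to-longint conversion versus resolving the format once and reusing it. Names and sample data are made up.)

```python
import struct
import sys

data = bytes(range(4)) * 25_000              # 100 KB of sample bytes

def byte_order():                            # helper invoked per conversion
    return ">" if sys.byteorder == "big" else "<"

def convert_per_call(buf):
    """Re-asks for the byte order on every single conversion."""
    return [struct.unpack_from(byte_order() + "i", buf, off)[0]
            for off in range(0, len(buf), 4)]

def convert_hoisted(buf):
    """Resolves the byte order once; reuses a precompiled format."""
    fmt = struct.Struct(byte_order() + "i")
    return [fmt.unpack_from(buf, off)[0]
            for off in range(0, len(buf), 4)]
```

Both return identical results; the second just doesn't pay the per-call cost inside the hot loop.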
On Tue, Apr 25, 2017 at 6:36 AM, James Crate via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> If you can easily modify the code, you could try commenting the SAVE
> RECORD command(s), and replace any queries for an existing record with
> REDUCE SELECTION($tablePtr->;0). That should be quick and easy
Tim:
There were delays in the code - for whatever reason, the original programmer
(not Brad!) had delays of up to 15 seconds in some of the processes.
I thought of kicking this out to multiple processes but that's involved.
The data has to follow a strict FIFO sequence so I'd have to examine the
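(One way to square strict FIFO with multiple processes, sketched in Python as an illustration: records that must stay ordered share a key — a hypothetical "site_id" here — and each key is pinned to one worker, so every worker replays its records strictly in arrival order. This is a design sketch, not the app's code.)

```python
import zlib
from collections import defaultdict

def partition_for_workers(records, n_workers, key=lambda r: r["site_id"]):
    """Pin each key to a fixed worker lane; per-key FIFO order is preserved
    because records are appended in the order they arrived."""
    lanes = defaultdict(list)
    for rec in records:                          # records arrive in FIFO order
        lane = zlib.crc32(key(rec).encode("utf-8")) % n_workers
        lanes[lane].append(rec)
    return dict(lanes)
```

Records from different sites can then be processed in parallel without any site ever seeing its own records out of order.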
Sent: Tuesday, April 25, 2017 10:01 AM
To: 4D iNug Technical <4d_tech@lists.4d.com>
Cc: Douglas von Roeder <dvonroe...@gmail.com>
Subject: Re: Experience with FTSY Sync Code//Speed up Sync Code
On Apr 25, 2017, at 12:01 PM, Douglas von Roeder wrote:
> Some payloads are pretty good sized but I don't recall if compression is
> used. The transmission time is very reasonable - everything just goes in
> the crapper when it comes to unbundling. I haven't timed the decoding vs
> encoding and
Randy:
Good summary. This code is slightly more efficient on the transfer because
it packs multiple records into a given BLOB but after reading your posting,
this issue could be compounded by lack of server processing power.
"Just gotta make sure the client "syncs" often."
At times, some users
Hi Douglas,
I've been using Web Services (SOAP) for quite some time with our
Synchronization module.
I only pack the fields that have changed (or all if new record)
Send an array of field numbers, and text array of string(values)
Pack it all into a blob.
Send 1 record per web service call.
Seems
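(A minimal Python sketch of the payload just described — an array of field numbers plus a parallel array of stringified values, packed into one blob. The byte order, length prefixes, and UTF-8 encoding are my assumptions for illustration, not the original 4D code.)

```python
import struct

def pack_changed_fields(field_numbers, values):
    """Pack only the changed fields: count, then (field#, length, text)."""
    blob = struct.pack(">i", len(field_numbers))
    for num, text in zip(field_numbers, values):
        data = text.encode("utf-8")
        blob += struct.pack(">ii", num, len(data)) + data
    return blob

def unpack_changed_fields(blob):
    """Recover the field-number array and the string-value array."""
    (count,) = struct.unpack_from(">i", blob, 0)
    pos, fields, values = 4, [], []
    for _ in range(count):
        num, n = struct.unpack_from(">ii", blob, pos)
        pos += 8
        fields.append(num)
        values.append(blob[pos:pos + n].decode("utf-8"))
        pos += n
    return fields, values
```

Sending only changed fields keeps each per-record web service call small, which matters at one record per call.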
- Install a UUID: all records now have unique identifiers.
- If the data is not there now, implement a 'site id' to determine/track
where the data originated.
- Use SEND RECORD, or plain text, or XML to export/import.
You're done!
:)
Chip
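(The recipe above in a one-function Python sketch, just to make the shape concrete — the field names and structure are illustrative, not Chip's code.)

```python
import uuid

def new_sync_record(site_id, payload):
    """Every record carries a globally unique id plus the site it came from,
    so imports can match records across sites without key collisions."""
    return {"uuid": str(uuid.uuid4()), "site_id": site_id, "data": payload}
```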
On Mon, 24 Apr 2017 18:25:39 -0700, Douglas von Roeder via
On Apr 24, 2017, at 11:20 PM, Douglas von Roeder via 4D_Tech
<4d_tech@lists.4d.com> wrote:
>
> Updating indexes takes some time but being able to update only 3 - 4
> records per second has got to have some other cause. If you've had positive
> experience with that approach, perhaps I need to
On Mon, Apr 24, 2017 at 8:30 PM, Wayne Stewart via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Mine performs similarly slowly (5-6 records per second) but it sends only
> one record per web service call.
>
> A smarter and less lazy person than me would bunch a few records into the
> one call, use
> The deal breaker is that the code is only updating about 3 records per
> second; it can sometimes take days for the server to catch up.
Ouch, that is slow. I haven't followed closely...if you're using SOAP, it
has to escape binaries to Base64...which is an absolutely hideous wire
format. If
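(The Base64 penalty is easy to show in a couple of lines of Python: every 3 bytes of binary become 4 output characters, so a blob grows by a third before it even hits the wire — on top of the CPU spent encoding and decoding it.)

```python
import base64

payload = bytes(range(256)) * 400        # 102,400 bytes of binary data
encoded = base64.b64encode(payload)      # grows by roughly a third
```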
Mine performs similarly slowly (5-6 records per second) but it sends only
one record per web service call.
A smarter and less lazy person than me would bunch a few records into the
one call, use compression etc. One day I might implement this but you
never can tell, I think beer is more
David:
The transmission time is very manageable — these folks send in data from
very remote locations and the payload always arrives at the server. The
BLOB's are sent via web services and, IIRC, the BLOB's are pretty good
sized, some being over 100k.
The deal breaker is that the code is only
> The current approach has got to be quite inefficient. The problem I've got
> is that I can't come up with anything other than a WAG as to how much
> faster it will be.
It's been some time since I tested out size v. speed relationships in a
similar setup. And, of course, it depends on your
Ron:
Oh yes, big fan of API Pack! I've used B2R and there's also JSON and a few
other approaches.
The issue I need to resolve is how much, if at all, will a different
encoding/decoding approach impact performance?
The current approach has got to be quite inefficient. The problem I've got
is
Doug,
I may be misunderstanding your application, but wouldn’t API Pack’s Record to
Blob and Blob to Record functions work? (It’s from pluggers.nl)
We use that, first compressing then converting the Blob that contains the
entire record to text before sending it as a variable via an HTTP Post.
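(A hedged Python sketch of that transport step — compress the record blob first, then convert it to text so it can travel as an ordinary HTTP POST variable. Base64 stands in here for whatever text conversion the API Pack workflow actually uses; the blob is just sample bytes.)

```python
import base64
import zlib

def blob_to_post_text(blob):
    """Compress, then text-encode, so the blob survives as a POST variable."""
    return base64.b64encode(zlib.compress(blob)).decode("ascii")

def post_text_to_blob(text):
    """Reverse the trip on the receiving side."""
    return zlib.decompress(base64.b64decode(text))
```

Compressing before encoding matters: Base64 adds a third, but compression on typical record data usually more than pays for it.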
Doug,
I do something similar.
I use Web services (lazy option). The sync records are in JSON (v13, so I
used NTK; later versions use C_OBJECT commands) for the "small" fields and
pack big fields into a Blob.
I can send code if you're interested.
Regards,
Wayne
Wayne Stewart