Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-27 Thread Douglas von Roeder via 4D_Tech
On Wed, Apr 26, 2017 at 10:37 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Gotcha. I've got my main code base in V13 still and like it fine.
>
> I still feel behind on this thread...what turned out to be the source of
> the slowdown? Packing? Unpacking? Transmission? Some combination or
> interaction of the above?
>
My suspicion is that the bottlenecks are the decoding and the inability to
Pause indexing.
I'm not thinking that there's much I can do about the decoding. There are a
lot of wrapper routines called as functions that do nothing but return the
value of a 4D constant. Most of those can be eliminated by just using the
4D constant. But the resource cost of loading a method and getting the
value of a constant is very low. And we're talking about updating 3.x
records per second - not 3.x hundreds of records but a Lawrence-Welkian "a
one anna two anna three" records per second.



> I ask because there are alternatives at every
> step. In arm-wavingly broad strokes:
>
> * Reducing the amount you need to transmit: Pays for itself quickly
> (usually.)
> * Reducing the amount you need to compare: Pays for itself quickly (or is
> likely to.)
> * Reducing the _number_ of transmissions: Can also be a big deal.
>
> On the last point, a non-4D example. Try downloading a couple of hundred
> individual files over FTP. Ugh. Takes forever. Now try transmitting those
> as a single archive. Fast. Any chance you could bundle what your remote
> users need in a file *in advance* (or on demand, I guess) and then transmit
> it as a download via a single call? You've got HTTP Get, as an example.
> Then they can unpack it and process it locally, even if their connection is
> closed.
>
No question about that. Performance tests of storage show that 4k blocks
are brutally slow compared to 1 GB blocks. Nature of the beast.
What we're hitting is not transmission time but time to encode/decode and
commit.
The server is updating 3.5± records per second with a good chip, lots of
RAM, and an SSD. The folks in the field are doing the encoding on laptops
and then they're bringing down records, decoding them, and committing them.
The server can churn away for hours but this is a real pain for the folks
on the laptops.
"Sync early, sync often" is their watchword but there are some users who
wait until they have thousands of changes so they have to be patient and
wait until their computer finishes processing.
The phrases "patient" and "wait until the computer is finished processing"
are not phrases that match "sales rep". Not even fuzzy match.



> Again, not sure I'm clear on the story so I may be saying things that are
> kind of irrelevant. Still, from what you say, this is one of those
> situations where some big gains are totally possible. Those are getting
> harder to find these days ;-)
>
Very true. On the other hand, "nothing is slow at 3 GHz".
Oh, wait…



> P.S. For loops are faster, as we all know ;-)
>
Two important things there - always use a longint for your index counter
and set the text for a For loop to a really small font and then color it
red. Works like a charm! ;-)


--
Douglas von Roeder
949-336-2902
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread David Adams via 4D_Tech
> The app is in V13 now and will be moving to V15 over the Summer so
there's no 4D Object available yet.

Gotcha. I've got my main code base in V13 still and like it fine.

I still feel behind on this thread...what turned out to be the source of
the slowdown? Packing? Unpacking? Transmission? Some combination or
interaction of the above? I ask because there are alternatives at every
step. In arm-wavingly broad strokes:

* Reducing the amount you need to transmit: Pays for itself quickly
(usually.)
* Reducing the amount you need to compare: Pays for itself quickly (or is
likely to.)
* Reducing the _number_ of transmissions: Can also be a big deal.

On the last point, a non-4D example. Try downloading a couple of hundred
individual files over FTP. Ugh. Takes forever. Now try transmitting those
as a single archive. Fast. Any chance you could bundle what your remote
users need in a file *in advance* (or on demand, I guess) and then transmit
it as a download via a single call? You've got HTTP Get, as an example.
Then they can unpack it and process it locally, even if their connection is
closed.
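[Editor's note: the archive-instead-of-many-transfers idea above is not 4D-specific. A minimal Python sketch, assuming made-up file names and using tarfile as a stand-in for whatever archive format the app would actually use:]

```python
import io
import tarfile

def bundle(files):
    """Pack many small payloads into one in-memory gzipped tar archive,
    so they can be transmitted with a single call instead of N calls."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def unbundle(blob):
    """Reverse of bundle(): one download, then unpack locally,
    even after the connection is closed."""
    out = {}
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out
```

The win is the same as the FTP example: one round trip and one compressed stream instead of hundreds of per-file handshakes.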

Again, not sure I'm clear on the story so I may be saying things that are
kind of irrelevant. Still, from what you say, this is one of those
situations where some big gains are totally possible. Those are getting
harder to find these days ;-)

P.S. For loops are faster, as we all know ;-)

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread Douglas von Roeder via 4D_Tech
On Wed, Apr 26, 2017 at 3:49 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> I just went back to the top of this thread and scanned down...and I think
> that I'm not understanding a key detail. Douglas, you're saying that the
> packed records have 'meta-data', but it sounds like that data is a map to
> the packing. So, packed data types and offsets, something of that sort.
>
"map" is a much better description. The data is "in line" in the BLOB along
with the actual data.




> Would it be possible to re-engineer it so that you have key summary
> information stored somewhere? The ideal way to optimize something slow is
> to not do it at all!
>
Great way to put it.
"re-engineer" - I'm not all that keen on updating this code but, as you
point out, perhaps a small change can yield significant benefit.
It would seem to make far more sense to do something along the lines of,
for example, sending out a longint array where $byteOrder_AL{Type} contains
the byte order value rather than accessing a wrapper routine for every
field type every time a value is pulled out of the BLOB.
Sheesh, we could even just use the literal value in the code! :-)
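[Editor's note: the lookup-array idea translates outside 4D too. A hedged Python sketch, where byte_order_for() is a hypothetical stand-in for the wrapper routine and a precomputed array indexed by field type replaces the per-field call:]

```python
# Hypothetical stand-in for the 4D wrapper routine: a function call
# per field just to fetch a constant.
FIELD_TYPES = ["alpha", "text", "real", "longint", "date"]

def byte_order_for(field_type):
    # Imagine method-load overhead here on every single call.
    return {"alpha": 1, "text": 1, "real": 2, "longint": 2, "date": 1}[field_type]

# The proposed optimization: resolve the constants once into an
# array indexed by type, then index into it instead of calling.
BYTE_ORDER = [byte_order_for(t) for t in FIELD_TYPES]
TYPE_INDEX = {t: i for i, t in enumerate(FIELD_TYPES)}

def decode_fields(fields):
    # One array lookup per field rather than one routine call per field.
    return [(f, BYTE_ORDER[TYPE_INDEX[f]]) for f in fields]
```

Per Douglas's own measurement the per-call cost is small, so this is a marginal gain; it mainly shows how the array version collapses the wrapper layer.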


> I didn't get a sense of what, or if, that might be. Is
> there a datestamp or a version stamp, or some sort of checksum that you're
> using to figure out if a record needs transmitting? If so, what about
> injecting that into the header, another field, an object field (seems like
> you might have a good use for an object field to serve as a key indicator
> store)? If this would prevent needless unpacking and needless transmission,
> it's potentially a big win. The break-even is that you avoid enough
> unpacking+transmission to pay for the extra storage cost & checks.
> Actually, if it were something searchable, perhaps you could search for the
> records that need sync using a simple index search (or searches) and then
> just bang through the result.
>
> I suspect I'm off the mark here, but just in case...I'm posting.
>
Glad to see you're posting again.

The app is in V13 now and will be moving to V15 over the Summer so there's
no 4D Object available yet.

The code checks each field against Old and, if the field has changed, it's
encoded and bundled into a BLOB. I don't know the algorithm used to create
additional BLOB's, but the average number of records per update BLOB is 18,
with a median of 7 (I've got data on about 700 synch sessions).

In addition to taking the server a while to chew through the records, it's
no picnic for the road warriors. They're on laptops and probably don't have
SSD's (something I didn't think of when I responded to Jim) so what takes
10 minutes on the server takes much longer when the user syncs and has to
unbundle all of these records on their little $1100 Dell laptop.

--
Douglas von Roeder
949-336-2902

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread David Adams via 4D_Tech
I just went back to the top of this thread and scanned down...and I think
that I'm not understanding a key detail. Douglas, you're saying that the
packed records have 'meta-data', but it sounds like that data is a map to
the packing. So, packed data types and offsets, something of that sort.

Would it be possible to re-engineer it so that you have key summary
information stored somewhere? The ideal way to optimize something slow is
to not do it at all! I didn't get a sense of what, or if, that might be. Is
there a datestamp or a version stamp, or some sort of checksum that you're
using to figure out if a record needs transmitting? If so, what about
injecting that into the header, another field, an object field (seems like
you might have a good use for an object field to serve as a key indicator
store)? If this would prevent needless unpacking and needless transmission,
it's potentially a big win. The break-even is that you avoid enough
unpacking+transmission to pay for the extra storage cost & checks.
Actually, if it were something searchable, perhaps you could search for the
records that need sync using a simple index search (or searches) and then
just bang through the result.
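[Editor's note: the "search instead of unpack" idea — an indexed stamp so only changed records are ever touched — might look like this in schematic Python; the mod_stamp field name is invented:]

```python
def records_needing_sync(records, last_sync_stamp):
    """Select only records modified since the last successful sync.
    With an indexed mod_stamp field this is one cheap indexed query,
    and unchanged records are never unpacked or transmitted at all."""
    return [r for r in records if r["mod_stamp"] > last_sync_stamp]
```

The break-even David describes: the stamp costs a little extra storage and one write per change, and pays for itself the first time it avoids a needless unpack-and-transmit.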

I suspect I'm off the mark here, but just in case...I'm posting.

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread Douglas von Roeder via 4D_Tech
On Tue, Apr 25, 2017 at 10:12 AM, Tim Nevels via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Here’s an idea. I’m assuming all the record processing is done in a single
> process. How much work would it be to modify the code so that it spawns
> multiple processes that can run at the same time? I don’t know the code,
> but maybe you could pass that big BLOB off to a method in another process
> and let it do the work. Have 3-4 of these processes all working at the same
> time. I wonder if that would give you performance boost.


Tim:

It's looking like the data format is causing the performance hit, so I'd
love to split this off across processes or CPUs or workstations, for that
matter. The problem I'd hit trying that now is that the BLOB's contain data
from multiple tables and they also contain multiple updates for changes to
a given table, so I really have to unpack the BLOB "to find out what's in
it". That's a fair amount of code to write but, to your point, there could
be a big payoff if I could split it across processes. In contrast, my
thinking is that there's a better payoff by simplifying the encode/decode
process. In addition to an anticipated performance boost, I won't need to
run special code just to view the data. Right now, I've got to run another
set of routines to display the data in a human readable form. All in all,
it's a very versatile approach to packaging data but it *is* a pain in the
ass to work with.

--
Douglas von Roeder
949-336-2902

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread Douglas von Roeder via 4D_Tech
Jim:

SSD - I'm a big believer in SSD's, no question of that. I'm using a MacBook
Pro with a 500 GB SSD. It's a "late 2013" model so it's not as fast as the
newer ones (450 MB/s vs > 1000). The server machine is using SSD's running
Win Server 2008 with a single i7-4770 CPU running at 3.4 GHz. RAM is in the
dozens of GB and the data file is about 10 GB.

I checked the stats on the chip at cpuboss.com and it's a pretty healthy
chip ("Nothing is slow at 3 gigahertz.") plus the tables are relatively
small. There are only 2.2M records, with 1.2M in the Audit Trail table and
600k in Line Items. Indexes have been updated to use cluster B-trees
when appropriate.

It seems that hardware's pretty good. I think the code's falling down on
the job.

--
Douglas von Roeder
949-336-2902

On Wed, Apr 26, 2017 at 2:55 PM, James Crate via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> On Apr 26, 2017, at 5:12 PM, Douglas von Roeder via 4D_Tech <
> 4d_tech@lists.4d.com> wrote:
>
> > There are many, repetitive method calls. For example, each time the code
> > converts a byte range to a longint, it calls a function that returns the
> > byte order. As much as I never met a subroutine I didn't like, perhaps
> the
> > sheer number of them is impacting performance.
>
> As I mentioned, this shouldn’t matter if the code runs compiled.  It will
> definitely make a difference interpreted.
>
> If you’re running compiled, it’s likely the speed issue is related to
> running 4D on a spinning hard drive. An SSD covers up a multitude of sins,
> and is cheap enough that now there is no sensible reason to not use one. A
> 128GB SSD, big enough for most normal 4D databases, probably costs less
> than an hour of your time, and will likely fix not only this problem, but
> others as well.
>
> For example, a process which deletes a bunch of data from a 4GB datafile
> so it could be used in the standalone version used by field reps used to
> take 45 minutes on a MacBookPro with 7200rpm drive. After installing an
> SSD, that process was about 5 minutes. On a newer MBP with PCIe SSD, it
> takes 90 seconds.
>
> FWIW, I wrote a few methods to work with Cannon’s OBJ module to pack
> selections of records for multiple tables to a C_OBJECT along with their
> table specifications, and unpack into the same fields (by number, with type
> conversion if necessary). It doesn’t follow relations or handle subtables.
> For unpacking you get a count of records for a table and unpack
> individually by index. Very few method calls, runs fast both interpreted
> and compiled. Maybe I’ll push that up to my Github account tonight.
>
> Jim Crate
>

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread James Crate via 4D_Tech
On Apr 26, 2017, at 5:12 PM, Douglas von Roeder via 4D_Tech 
<4d_tech@lists.4d.com> wrote:

> There are many, repetitive method calls. For example, each time the code
> converts a byte range to a longint, it calls a function that returns the
> byte order. As much as I never met a subroutine I didn't like, perhaps the
> sheer number of them is impacting performance.

As I mentioned, this shouldn’t matter if the code runs compiled.  It will 
definitely make a difference interpreted. 

If you’re running compiled, it’s likely the speed issue is related to running 
4D on a spinning hard drive. An SSD covers up a multitude of sins, and is cheap 
enough that now there is no sensible reason to not use one. A 128GB SSD, big 
enough for most normal 4D databases, probably costs less than an hour of your 
time, and will likely fix not only this problem, but others as well.

For example, a process which deletes a bunch of data from a 4GB datafile so it 
could be used in the standalone version used by field reps used to take 45 
minutes on a MacBookPro with 7200rpm drive. After installing an SSD, that 
process was about 5 minutes. On a newer MBP with PCIe SSD, it takes 90 seconds.

FWIW, I wrote a few methods to work with Cannon’s OBJ module to pack selections 
of records for multiple tables to a C_OBJECT along with their table 
specifications, and unpack into the same fields (by number, with type 
conversion if necessary). It doesn’t follow relations or handle subtables. For 
unpacking you get a count of records for a table and unpack individually by 
index. Very few method calls, runs fast both interpreted and compiled. Maybe 
I’ll push that up to my Github account tonight.

Jim Crate


Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-26 Thread Douglas von Roeder via 4D_Tech
On Tue, Apr 25, 2017 at 6:36 AM, James Crate via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> If you can easily modify the code, you could try commenting the SAVE
> RECORD command(s), and replace any queries for an existing record with
> REDUCE SELECTION($tablePtr->;0). That should be quick and easy and will
> show the speed of the unpacking only.  Alternatively, you could test
> importing into an empty database, which should remove the impact of queries
> and index updates.
>
> If there are many method calls and much pointer dereferencing, 4D will be
> very slow interpreted but much faster compiled.
>

Jim:

Excellent suggestions. Thank you.

There are many, repetitive method calls. For example, each time the code
converts a byte range to a longint, it calls a function that returns the
byte order. As much as I never met a subroutine I didn't like, perhaps the
sheer number of them is impacting performance.

The idea of running a synch session without committing records is a winner. I'll
give it a try.

--
Douglas von Roeder
949-336-2902

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Douglas von Roeder via 4D_Tech
Tim:

There were delays in the code - for whatever reason, the original programmer
(not Brad!) had delays of up to 15 seconds in some of the processes.

I thought of kicking this out to multiple processes but that's involved.
The data has to follow a strict FIFO sequence so I'd have to examine the
BLOB's to ensure that sequence was followed. I can modify the metadata so
that I can avoid sequencing errors but if I modify how the data is encoded,
I'm leaning toward swapping out the BLOB code for JSON-ish code and seeing
what kind of throughput I get.
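[Editor's note: a strict FIFO requirement doesn't rule out parallelism entirely — updates usually only need to stay ordered *per record* (or per table), so they can be partitioned by key and each partition processed independently. A schematic Python sketch of the grouping step; the table/record_id keys are assumptions about what the metadata would carry:]

```python
from collections import defaultdict

def partition_preserving_fifo(updates):
    """Group updates by (table, record_id). Within each group the
    original arrival order is preserved, so each group could be handed
    to its own process while the per-record FIFO sequence still holds."""
    groups = defaultdict(list)
    for upd in updates:
        groups[(upd["table"], upd["record_id"])].append(upd)
    return dict(groups)
```

The design choice: global FIFO forces a single consumer, but per-key FIFO allows as many consumers as there are keys in flight, which is where the multi-process payoff would come from.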

Per my email to Randy, server horsepower is a contributing factor so
perhaps this is a good reason for the client to upgrade their server and
see what benefits we get.

--
Douglas von Roeder
949-336-2902

On Tue, Apr 25, 2017 at 10:12 AM, Tim Nevels via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> On Apr 25, 2017, at 12:01 PM, Douglas von Roeder wrote:
>
> > Some payloads are pretty good sized but I don't recall if compression is
> > used. The transmission time is very reasonable - everything just goes in
> > the crapper when it comes to unbundling. I haven't timed the decoding vs
> > encoding and then vs actually writing to disk. That might provide some
> > insight.
>
> Here’s an idea. I’m assuming all the record processing is done in a single
> process. How much work would it be to modify the code so that it spawns
> multiple processes that can run at the same time? I don’t know the code,
> but maybe you could pass that big BLOB off to a method in another process
> and let it do the work. Have 3-4 of these processes all working at the same
> time. I wonder if that would give you performance boost.
>
> Have you checked the code to see if there is any throttling going on? Maybe
> there are some DELAY PROCESS commands sprinkled around to keep the sync
> process from saturating the server.
>
> Did you say it was running as a stored procedure, or is it running on 4D
> Client?
>
> Tim
>
> 
> Tim Nevels
> Innovative Solutions
> 785-749-3444
> timnev...@mac.com
> 
>

RE: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Randy Engle via 4D_Tech
Douglas,

Synching is a wonderful thing.

Most users think it's a magic bullet.   ;-)

Yes, it does have some "considerations"

Randy Engle
XC2 Software LLC


-Original Message-
From: 4D_Tech [mailto:4d_tech-boun...@lists.4d.com] On Behalf Of Douglas von 
Roeder via 4D_Tech
Sent: Tuesday, April 25, 2017 10:01 AM
To: 4D iNug Technical <4d_tech@lists.4d.com>
Cc: Douglas von Roeder <dvonroe...@gmail.com>
Subject: Re: Experience with FTSY Sync Code//Speed up Sync Code

Randy:

Good summary. This code is slightly more efficient on the transfer because it 
packs multiple records into a given BLOB but after reading your posting, this 
issue could be compounded by lack of server processing power.

"Just gotta make sure the client "syncs" often."
At times, some users dump thousands of records so perhaps the server's getting 
overwhelmed.

Given that the computing resources used by the server to *de*code all those 
BLOB's will be in the ballpark of the sum of the resources that all of the 
standalones used to *en*code all of that data, if the standalones don't "synch 
early, synch often", that's going to put a significant load on the server 
machine. The more complex the encode/decode, the worse things are for the 
server. And if the server's underpowered, large intermittent synch sessions 
will exacerbate the situation.

Maybe one way to look at this is that this is an inverse "distributed 
processing" situation - a single server has to reverse all of the processing 
that's been done by N client workstations.


--
Douglas von Roeder
949-336-2902

On Tue, Apr 25, 2017 at 8:40 AM, Randy Engle via 4D_Tech < 
4d_tech@lists.4d.com> wrote:

> Hi Douglas,
>
> I've been using Web Services (SOAP) for quite some time with our 
> Synchronization module.
> I only pack the fields that have changed (or all if new record) Send 
> an array of field numbers, and text array of string(values) Pack it 
> all into a blob.
> Send 1 record per web service call.
> Seems fast enough for our purposes.  5-10+ records per second, 
> depending upon record size and network.
> Just gotta make sure the client "syncs" often.
>
> Randy Engle
> XC2 Software LLC
>
> -Original Message-
> From: 4D_Tech [mailto:4d_tech-boun...@lists.4d.com] On Behalf Of 
> Douglas von Roeder via 4D_Tech
> Sent: Monday, April 24, 2017 6:26 PM
> To: 4D iNug Technical <4d_tech@lists.4d.com>
> Cc: Douglas von Roeder <dvonroe...@gmail.com>
> Subject: Experience with FTSY Sync Code//Speed up Sync Code
>
> Anyone here have experience with Brad Weber's "FTSY Sync" code?
>
> The code in question was written almost 20 years ago to synchronize 
> records between standalones and a client server system, and I know 
> that it was used by a couple of companies including Husqvarna in North 
> Carolina.
>
> One aspect of the code that's challenging is that the V11+ code (the "new"
> code) could no longer use 4D Open so the design was changed to pack 
> field data into BLOB's. The BLOB's contain metadata for every field 
> including the field number, the field type, the data length, etc.
>
> When the synch records are unpacked, the metadata is used to move 
> sequentially through the BLOB, converting each byte range back to its 
> native 4D type using BLOB to text, BLOB to real, BLOB to longint, etc.
>
> My suspicion is that this method of encoding/decoding is contributing 
> to poor performance* for updating records and I'm hoping that someone 
> has resolved this issue.
>
>
>
> The underlying question is how much faster/slower would it be to 
> encode/decode data using an alternative method?
>
> A much simpler alternative is to use a "field ID" (String(Table 
> number;"000")+the ID (a string)) as the tag/property name and use OB 
> Get/Set(property;data;field type) to deal with the data.
>
> This approach would eliminate a significant amount of code, no 
> question, but what would be the impact on performance?
>
> Comments, thoughts, and questions appreciated.
>
>
> *this is a V13 system so I can't use Pause index
>
>
>

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Tim Nevels via 4D_Tech
On Apr 25, 2017, at 12:01 PM, Douglas von Roeder wrote:

> Some payloads are pretty good sized but I don't recall if compression is
> used. The transmission time is very reasonable - everything just goes in
> the crapper when it comes to unbundling. I haven't timed the decoding vs
> encoding and then vs actually writing to disk. That might provide some
> insight.

Here’s an idea. I’m assuming all the record processing is done in a single 
process. How much work would it be to modify the code so that it spawns 
multiple processes that can run at the same time? I don’t know the code, but 
maybe you could pass that big BLOB off to a method in another process and let 
it do the work. Have 3-4 of these processes all working at the same time. I 
wonder if that would give you performance boost. 

Have you checked the code to see if there is any throttling going on? Maybe 
there are some DELAY PROCESS commands sprinkled around to keep the sync process 
from saturating the server. 

Did you say it was running as a stored procedure, or is it running on 4D 
Client?

Tim


Tim Nevels
Innovative Solutions
785-749-3444
timnev...@mac.com



Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Douglas von Roeder via 4D_Tech
Randy:

Good summary. This code is slightly more efficient on the transfer because
it packs multiple records into a given BLOB but after reading your posting,
this issue could be compounded by lack of server processing power.

"Just gotta make sure the client "syncs" often."
At times, some users dump thousands of records so perhaps the server's
getting overwhelmed.

Given that the computing resources used by the server to *de*code all those
BLOB's will be in the ballpark of the sum of the resources that all of the
standalones used to *en*code all of that data, if the standalones don't
"synch early, synch often", that's going to put a significant load on the
server machine. The more complex the encode/decode, the worse things are
for the server. And if the server's underpowered, large intermittent synch
sessions will exacerbate the situation.

Maybe one way to look at this is that this is an inverse "distributed
processing" situation - a single server has to reverse all of
the processing that's been done by N client workstations.


--
Douglas von Roeder
949-336-2902

On Tue, Apr 25, 2017 at 8:40 AM, Randy Engle via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Hi Douglas,
>
> I've been using Web Services (SOAP) for quite some time with our
> Synchronization module.
> I only pack the fields that have changed (or all if new record)
> Send an array of field numbers, and text array of string(values)
> Pack it all into a blob.
> Send 1 record per web service call.
> Seems fast enough for our purposes.  5-10+ records per second, depending
> upon record size and network.
> Just gotta make sure the client "syncs" often.
>
> Randy Engle
> XC2 Software LLC
>
> -Original Message-
> From: 4D_Tech [mailto:4d_tech-boun...@lists.4d.com] On Behalf Of Douglas
> von Roeder via 4D_Tech
> Sent: Monday, April 24, 2017 6:26 PM
> To: 4D iNug Technical <4d_tech@lists.4d.com>
> Cc: Douglas von Roeder 
> Subject: Experience with FTSY Sync Code//Speed up Sync Code
>
> Anyone here have experience with Brad Weber's "FTSY Sync" code?
>
> The code in question was written almost 20 years ago to synchronize
> records between standalones and a client server system, and I know that it
> was used by a couple of companies including Husqvarna in North Carolina.
>
> One aspect of the code that's challenging is that the V11+ code (the "new"
> code) could no longer use 4D Open so the design was changed to pack field
> data into BLOB's. The BLOB's contain metadata for every field including the
> field number, the field type, the data length, etc.
>
> When the synch records are unpacked, the metadata is used to move
> sequentially through the BLOB, converting each byte range back to its
> native 4D type using BLOB to text, BLOB to real, BLOB to longint, etc.
>
> My suspicion is that this method of encoding/decoding is contributing to
> poor performance* for updating records and I'm hoping that someone has
> resolved this issue.
>
>
>
> The underlying question is how much faster/slower would it be to
> encode/decode data using an alternative method?
>
> A much simpler alternative is to use a "field ID" (String(Table
> number;"000")+the ID (a string)) as the tag/property name and use OB
> Get/Set(property;data;field type) to deal with the data.
>
> This approach would eliminate a significant amount of code, no question,
> but what would be the impact on performance?
>
> Comments, thoughts, and questions appreciated.
>
>
> *this is a V13 system so I can't use Pause index
>
>
>

RE: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Randy Engle via 4D_Tech
Hi Douglas,

I've been using Web Services (SOAP) for quite some time with our 
Synchronization module.
I only pack the fields that have changed (or all if new record)
Send an array of field numbers, and text array of string(values)
Pack it all into a blob.
Send 1 record per web service call.
Seems fast enough for our purposes.  5-10+ records per second, depending upon 
record size and network.
Just gotta make sure the client "syncs" often.
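[Editor's note: Randy's "only the fields that have changed" step is essentially a record diff. A hedged Python sketch, treating a record as a plain field-name → value dict:]

```python
def changed_fields(old, new):
    """Return only the fields whose values differ (or all fields
    when there is no prior record) - the delta that actually needs
    to travel in the sync payload."""
    if old is None:          # new record: send everything
        return dict(new)
    return {k: v for k, v in new.items() if old.get(k) != v}
```

Pairing this delta with field numbers and stringified values, then packing the arrays into a BLOB, is the shape of the payload Randy describes.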

Randy Engle
XC2 Software LLC

-Original Message-
From: 4D_Tech [mailto:4d_tech-boun...@lists.4d.com] On Behalf Of Douglas von 
Roeder via 4D_Tech
Sent: Monday, April 24, 2017 6:26 PM
To: 4D iNug Technical <4d_tech@lists.4d.com>
Cc: Douglas von Roeder 
Subject: Experience with FTSY Sync Code//Speed up Sync Code

Anyone here have experience with Brad Weber's "FTSY Sync" code?

The code in question was written almost 20 years ago to synchronize records 
between standalones and a client/server system, and I know that it was used by 
a couple of companies, including Husqvarna in North Carolina.

One aspect of the code that's challenging is that the V11+ code (the "new"
code) could no longer use 4D Open, so the design was changed to pack field data 
into BLOBs. The BLOBs contain metadata for every field, including the field 
number, the field type, the data length, etc.

When the sync records are unpacked, the metadata is used to move sequentially 
through the BLOB, converting each byte range back to its native 4D type using 
BLOB to text, BLOB to real, BLOB to longint, etc.
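In outline, the decode side presumably looks something like this (a sketch, not
the actual FTSY code; the layout and variable names are illustrative):

```4d
  // Sketch of a sequential tag/length/value BLOB decode; not the actual FTSY code
C_BLOB($data)
C_LONGINT($offset;$fieldNum;$fieldType;$dataLen;$tableNum)
C_POINTER($fieldPtr)

$offset:=0
While ($offset<BLOB size($data))
	  // each value is preceded by its metadata; the offset advances automatically
	$fieldNum:=BLOB to longint($data;Native byte ordering;$offset)
	$fieldType:=BLOB to longint($data;Native byte ordering;$offset)
	$dataLen:=BLOB to longint($data;Native byte ordering;$offset)
	$fieldPtr:=Field($tableNum;$fieldNum)  // $tableNum resolved elsewhere
	Case of
		: ($fieldType=Is text)
			$fieldPtr->:=BLOB to text($data;Text without length;$offset;$dataLen)
		: ($fieldType=Is real)
			$fieldPtr->:=BLOB to real($data;Native byte ordering;$offset)
		: ($fieldType=Is longint)
			$fieldPtr->:=BLOB to longint($data;Native byte ordering;$offset)
	End case
End while
```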

My suspicion is that this method of encoding/decoding is contributing to poor 
performance* for updating records and I'm hoping that someone has resolved this 
issue.



The underlying question is how much faster/slower would it be to encode/decode 
data using an alternative method?

A much simpler alternative is to use a "field ID" (String(Table 
number;"000")+the ID, a string) as the tag/property name and use OB 
Get/Set(property;data;field type) to deal with the data.

This approach would eliminate a significant amount of code, no question, but 
what would be the impact on performance?
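Roughly, the object-based version would collapse to something like this (a
sketch only; note that OB SET/OB Get and C_OBJECT exist in v14+, so it isn't
available on the V13 system mentioned below):

```4d
  // Sketch of the "field ID" / object approach (v14+ only)
C_OBJECT($rec)
C_TEXT($prop)
C_LONGINT($i;$tableNum)
C_POINTER($fieldPtr)

For ($i;1;Get last field number($tableNum))
	$fieldPtr:=Field($tableNum;$i)
	$prop:=String($tableNum;"000")+String($i;"000")  // the "field ID" tag
	OB SET($rec;$prop;$fieldPtr->)
End for
  // receiving side: $fieldPtr->:=OB Get($rec;$prop)
```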

Comments, thoughts, and questions appreciated.


*this is a V13 system so I can't use Pause index




Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread Chip Scheide via 4D_Tech
Install a UUID field: all records now have unique identifiers.
If the data is not there now, implement a 'site ID' to determine/track 
where the data originated.

Use SEND RECORD, or plain text, or XML to export/import.
You're done!

:)
Chip

On Mon, 24 Apr 2017 18:25:39 -0700, Douglas von Roeder via 4D_Tech 
wrote:
> 
> A much simpler alternative is to use a "field ID" (String(Table
> number;"000")+the ID, a string) as the tag/property name and use OB
> Get/Set(property;data;field type) to deal with the data.
---
Gas is for washing parts
Alcohol is for drinkin'
Nitromethane is for racing 

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-25 Thread James Crate via 4D_Tech
On Apr 24, 2017, at 11:20 PM, Douglas von Roeder via 4D_Tech 
<4d_tech@lists.4d.com> wrote:
> 
> Updating indexes takes some time but being able to update only 3 - 4
> records per second has got to have some other cause. If you've had positive
> experience with that approach, perhaps I need to look for some
> other factor.

If you can easily modify the code, you could try commenting the SAVE RECORD 
command(s), and replace any queries for an existing record with REDUCE 
SELECTION($tablePtr->;0). That should be quick and easy and will show the speed 
of the unpacking only.  Alternatively, you could test importing into an empty 
database, which should remove the impact of queries and index updates.
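Something along these lines, wrapped around the existing unpack routine (a
sketch; Unpack_SyncBlob is a stand-in name for the FTSY decode method):

```4d
  // Isolate decode time: no query, no save
C_LONGINT($start)
$start:=Milliseconds
REDUCE SELECTION($tablePtr->;0)  // in place of the QUERY for the existing record
Unpack_SyncBlob($syncBlob)  // stand-in for the actual unpack method
  // SAVE RECORD($tablePtr->)  // commented out for the test
ALERT("Unpack only: "+String(Milliseconds-$start)+" ms")
```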

If there are many method calls and much pointer dereferencing, 4D will be very 
slow interpreted but much faster compiled.

Jim Crate


Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Douglas von Roeder via 4D_Tech
On Mon, Apr 24, 2017 at 8:30 PM, Wayne Stewart via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Mine performs similarly slowly (5-6 records per second) but it sends only
> one record per web service call.
>
> A smarter and less lazy person than me would bunch a few records into the
> one call, use compression etc.  One day I might implement this but you
> never can tell, I think beer is more interesting.
>

As you're pointing out, your code is not optimized, yet you're getting twice
as many records per second. Something seems amiss.

The payload is sent in a single web service call with data from multiple
records being packed into the BLOB sequentially. It's complex code,
elegantly written.

Some payloads are pretty good sized but I don't recall if compression is
used. The transmission time is very reasonable - everything just goes in
the crapper when it comes to unbundling. I haven't timed the decoding vs. the
encoding vs. actually writing to disk. That might provide some insight.

--
Douglas von Roeder
949-336-2902

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread David Adams via 4D_Tech
> The deal breaker is that the code is only updating about 3 records per
> second; it can sometimes take days for the server to catch up.

Ouch, that is slow. I haven't followed closely...if you're using SOAP, it
has to escape binaries to Base64...which is an absolutely hideous wire
format. If you're trying to send a lot of binary data, Base64 is *not*
your friend. It works, but it makes your payloads just stupidly large.
Unless I'm wrong and you're getting the space back via compression. I think
that 4D added a special option for 4D:4D SOAP communications for
exactly this case.

You mentioned 4D Open. One of its strengths was using 4D's (binary) wire
format.

If you need to send binary objects in a smarter way, there are
options...but, again, I suspect I'm well off the main point here.

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Wayne Stewart via 4D_Tech
Mine performs similarly slowly (5-6 records per second) but it sends only
one record per web service call.

A smarter and less lazy person than me would bunch a few records into the
one call, use compression etc.  One day I might implement this but you
never can tell, I think beer is more interesting.


Regards,

Wayne


Wayne Stewart
about.me/waynestewart



On 25 April 2017 at 13:20, Douglas von Roeder via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Jim:
>
> "I wrote similar code a long time ago, and just replaced it last year (it
> stored the metadata in the resource fork, which was going to be problematic
> soon). Exports usually contained hundreds of records of 3-4 tables of 100+
> fields, and when importing were parsed pretty much instantly.
>
> Unless the unpacking code is very very inefficient, it’s not likely to be
> the source of any noticeable slowness when compiled."
>
> Updating indexes takes some time but being able to update only 3 - 4
> records per second has got to have some other cause. If you've had positive
> experience with that approach, perhaps I need to look for some
> other factor.
>
>
> --
> Douglas von Roeder
> 949-336-2902
>
> On Mon, Apr 24, 2017 at 7:46 PM, James Crate via 4D_Tech <
> 4d_tech@lists.4d.com> wrote:
>
> > On Apr 24, 2017, at 9:25 PM, Douglas von Roeder via 4D_Tech <
> > 4d_tech@lists.4d.com> wrote:
> >
> > > [snip]
> >
> > I wrote similar code a long time ago, and just replaced it last year (it
> > stored the metadata in the resource fork, which was going to be
> problematic
> > soon). Exports usually contained hundreds of records of 3-4 tables of
> 100+
> > fields, and when importing were parsed pretty much instantly.
> >
> > Unless the unpacking code is very very inefficient, it’s not likely to be
> > the source of any noticeable slowness when compiled.
> >
> > Jim Crate
> >

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Douglas von Roeder via 4D_Tech
David:

The transmission time is very manageable — these folks send in data from
very remote locations and the payload always arrives at the server. The
BLOBs are sent via web services and, IIRC, the BLOBs are pretty good
sized, some being over 100k.

The deal breaker is that the code is only updating about 3 records per
second; it can sometimes take days for the server to catch up.

--
Douglas von Roeder
949-336-2902

On Mon, Apr 24, 2017 at 7:42 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> > The current approach has got to be quite inefficient. The problem I've
> got
> > is that I can't come up with anything other than a WAG as to how much
> > faster it will be.
>
> It's been some time since I tested out size v. speed relationships in a
> similar setup. And, of course, it depends  on your hardware, network, etc.
> But, when I last tested it, the size of the download correlated pretty
> exactly with the download time. I mean end-to-end. So, the overhead on
> compression paid for itself nearly instantly. (The overhead proved to be
> small.) 30% smaller payload, 30% less time required to download it.
>
> Like you, I wasn't clear what the breakeven point was for the expense of
> compression. I couldn't find a place where compression was a bad bet.
>
> Assuming I'm on track with what you guys are talking about and
> haven't wandered off into the woods again...

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread David Adams via 4D_Tech
> The current approach has got to be quite inefficient. The problem I've got
> is that I can't come up with anything other than a WAG as to how much
> faster it will be.

It's been some time since I tested out size v. speed relationships in a
similar setup. And, of course, it depends on your hardware, network, etc.
But, when I last tested it, the size of the download correlated pretty
exactly with the download time. I mean end-to-end. So, the overhead on
compression paid for itself nearly instantly. (The overhead proved to be
small.) 30% smaller payload, 30% less time required to download it.

Like you, I wasn't clear what the breakeven point was for the expense of
compression. I couldn't find a place where compression was a bad bet.
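If anyone wants to check their own breakeven, it's a quick measurement along
these lines (a sketch, on a representative payload):

```4d
  // Measure compression cost vs. bytes saved for a typical payload
C_BLOB($payload)
C_LONGINT($t0;$before)

$before:=BLOB size($payload)
$t0:=Milliseconds
COMPRESS BLOB($payload)
ALERT(String($before)+" -> "+String(BLOB size($payload))+" bytes in "+String(Milliseconds-$t0)+" ms")
```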

Assuming I'm on track with what you guys are talking about and
haven't wandered off into the woods again...

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Douglas von Roeder via 4D_Tech
Ron:

Oh yes, big fan of API Pack! I've used B2R and there's also JSON and a few
other approaches.

The issue I need to resolve is how much, if at all, a different
encoding/decoding approach will impact performance.

The current approach has got to be quite inefficient. The problem I've got
is that I can't come up with anything other than a WAG as to how much
faster it will be.

On Mon, Apr 24, 2017 at 7:06 PM Ronald Rosell via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Doug,
>
> I may be misunderstanding your application, but wouldn’t API Pack’s Record
> to Blob and Blob to Record functions work?  (It’s from pluggers.nl)
>
> We use that, first compressing then converting the Blob that contains the
> entire record to text before sending it as a variable via an HTTP Post.
> So, on the sending side the pseudocode is:
>
> $err:=API Record to Blob ($tablenum;$record)
> COMPRESS BLOB ($record)
> BASE64 ENCODE ($record)
> $varToSend:= BLOB to text ($record;UTF8 C string)
>
> On the other end, the variable is converted back to a Blob and then we use
> Blob to Record to create a 4D record with all of the fields.
>
> TEXT TO BLOB($record;$recblob;UTF8 C string)
> BASE64 DECODE($recblob)
> EXPAND BLOB($recblob)
>  // do some stuff here using other passed variables to
> identify/load the table and record, or create a new record, per below, and
> then
> $err:=API Blob To Record ($tablenum;$recblob)
>
> Along with the variable containing the record data, the HTTP post contains
> a few other variables including the table number, the key index field and
> the key index value, so the receiving database knows what table it’s
> updating and whether it’s revising an existing record or adding a new one.
> Using UUID’s instead of sequence numbers as the key field takes care of any
> issue of duplication between locally-generated records and received records
> from a remote system. Other than that we don’t need metadata for each
> field, as Record to Blob and Blob to Record handle that.
>
> Hope this helps!
>
> Ron Rosell
>
>
> > On Apr 24, 2017, at 6:42 PM, Wayne Stewart via 4D_Tech <
> 4d_tech@lists.4d.com> wrote:
> > [snip]

Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Ronald Rosell via 4D_Tech
Doug,

I may be misunderstanding your application, but wouldn’t API Pack’s Record to 
Blob and Blob to Record functions work?  (It’s from pluggers.nl)

We use that, first compressing then converting the Blob that contains the 
entire record to text before sending it as a variable via an HTTP Post.  So, on 
the sending side the pseudocode is:

$err:=API Record to Blob ($tablenum;$record)
COMPRESS BLOB ($record)
BASE64 ENCODE ($record)
$varToSend:= BLOB to text ($record;UTF8 C string)

On the other end, the variable is converted back to a Blob and then we use Blob 
to Record to create a 4D record with all of the fields.  

TEXT TO BLOB($record;$recblob;UTF8 C string)
BASE64 DECODE($recblob)
EXPAND BLOB($recblob)
 // do some stuff here using other passed variables to identify/load 
the table and record, or create a new record, per below, and then
$err:=API Blob To Record ($tablenum;$recblob)

Along with the variable containing the record data, the HTTP post contains a 
few other variables including the table number, the key index field and the key 
index value, so the receiving database knows what table it’s updating and 
whether it’s revising an existing record or adding a new one.  Using UUID’s 
instead of sequence numbers as the key field takes care of any issue of 
duplication between locally-generated records and received records from a 
remote system. Other than that we don’t need metadata for each field, as Record 
to Blob and Blob to Record handle that.

Hope this helps!

Ron Rosell


> On Apr 24, 2017, at 6:42 PM, Wayne Stewart via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> Doug,
> 
> I do something similar.
> 
> I use Web services (lazy option).  The sync records are in JSON (v13 so I
> used NTK later versions use C_OBJECT commands) for the "small" fields and
> pack big fields into a Blob.
> 
> I can send code if you're interested.
> 
> 
> 
> Regards,
> 
> Wayne
> 
> 
> Wayne Stewart
> about.me/waynestewart
> 
> 
> 
> On 25 April 2017 at 11:25, Douglas von Roeder via 4D_Tech <
> 4d_tech@lists.4d.com> wrote:
> 
>> [snip]

__

Ron Rosell
President
StreamLMS

301-3537 Oak Street
Vancouver, BC V6H 2M1
Canada

Direct phone (all numbers reach me)
Vancouver: (+1) (604) 628-1933  |  Seattle: (+1) (425) 956-3570  |  Palm Beach: 
(+1) (561) 351-6210 
email: r...@streamlms.com  |  fax: (+1) (815) 301-9058  |  Skype: ronrosell


Re: Experience with FTSY Sync Code//Speed up Sync Code

2017-04-24 Thread Wayne Stewart via 4D_Tech
Doug,

I do something similar.

I use Web services (lazy option).  The sync records are in JSON (v13, so I
used NTK; later versions use C_OBJECT commands) for the "small" fields, and
pack big fields into a Blob.

I can send code if you're interested.



Regards,

Wayne


Wayne Stewart
about.me/waynestewart



On 25 April 2017 at 11:25, Douglas von Roeder via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> [snip]