Re: [Firebird-devel] Plans for 3.0.8?

2021-09-09 Thread Slavomir Skopalik

No, nobody is talking about bug reports.

The topics are:

1. Customers don't want to run an unofficial version.

2. Snapshots are not tested by Firebird users.

3. A missing official change log (what's new) means you will not pass an
audit.


4. Regressions in the code (remember FB 2.5).

And if a snapshot has release quality, why isn't it released officially
more often?


Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
MASA - Collection and evaluation of data from machines and laboratories
http://eng.elektlabs.com/products-and-services/masa
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.com

On 9.09.21 18:19, Dimitry Sibiryakov wrote:

Slavomir Skopalik wrote 09.09.2021 18:12:

But these are automated tests, not real-world experience.


  Automated tests were created after bugs were reported. If you see no test
for a bug, nobody reported it, right?






Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Plans for 3.0.8?

2021-09-09 Thread Slavomir Skopalik

But these are automated tests, not real-world experience.

Slavek


On 9.09.21 18:02, Dimitry Sibiryakov wrote:

Slavomir Skopalik wrote 09.09.2021 17:54:

Finally, releases are tested by a large number of users, but snapshots?


  http://firebirdtest.com







Re: [Firebird-devel] Plans for 3.0.8?

2021-09-09 Thread Slavomir Skopalik
From a customer's point of view, if we are using the latest official
release, we are doing the best we can.


Using snapshots is the same as shipping our own modifications.

And, by the way, in many cases not all snapshots are stable.

You can see post-commit fixes or reverts.

Finally, releases are tested by a large number of users, but snapshots?

Slavek


On 9.09.21 13:35, Dimitry Sibiryakov wrote:

Slavomir Skopalik wrote 09.09.2021 12:11:
if you put snapshots into production, you are responsible
for everything.


  It sounds as if the Firebird project could take liability for release
builds...


Another point of view is that a snapshot is not tested and
verified by the public.


  Test suites are public. Anyone can run them against any build.







Re: [Firebird-devel] Plans for 3.0.8?

2021-09-09 Thread Slavomir Skopalik

Sorry to say,

if you put snapshots into production, you are responsible for
everything.


Another point of view is that a snapshot is not tested and verified
by the public.


For my customers, that is no way to go.

Slavek


On 9.09.21 11:57, Dimitry Sibiryakov wrote:

Gabor Boros wrote 09.09.2021 9:36:

3.0.7 released in October 2020.


  Snapshots are released daily.







Re: [Firebird-devel] Generating transactions

2019-08-09 Thread Slavomir Skopalik
In case you use the transaction ID in business logic and need it to
only go up.


Slavek


On 9.08.19 12:46, Adriano dos Santos Fernandes wrote:

On 09/08/2019 07:37, liviuslivius wrote:

Hi
  
I have checked this now and I see this was in InterBase:

  -START(ING_TRANS)     starting transaction ID for restore
  
Maybe it is time to add this also to Firebird?



Other than engine debugging with greater IDs, what would this be used for?


Adriano





Re: [Firebird-devel] Performance - 2.5 vs 3.0 vs 4.0

2019-01-24 Thread Slavomir Skopalik

V3 SuperServer should be compared with V2.5 SuperClassic.

Slavek




2.5 - 10:55
3.0 -  8:11
4.0 -  8:41

Same script with numbered (1..5) table names run concurrently against
one database:


2.5 - 10:51 - 336.72 MB
3.0 -  8:09 - 329.79 MB
4.0 -  8:45 - 329.79 MB


i.e. v3 finally becomes faster than v2.5, but v4 is slower than v3 
(while faster than v2.5)


Correct?


Dmitry
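For a quick comparison, the mm:ss timings quoted above convert to seconds and a relative speedup as follows (a small arithmetic sketch using the single-table figures from this message):

```python
def to_seconds(mmss: str) -> int:
    """Convert an mm:ss string like '10:55' to whole seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Single-table run times quoted in the message above
runs = {"2.5": "10:55", "3.0": "8:11", "4.0": "8:41"}
secs = {ver: to_seconds(t) for ver, t in runs.items()}

# v3 is the fastest; v4 sits between v3 and v2.5
assert secs["3.0"] < secs["4.0"] < secs["2.5"]

# Relative speedup of v3 over v2.5, as a percentage
speedup = (secs["2.5"] - secs["3.0"]) / secs["2.5"] * 100
print(f"v3 is {speedup:.0f}% faster than v2.5")
```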




[Firebird-devel] [FB-Tracker] Created: (CORE-5958) FB 2.5.8 hangs

2018-10-31 Thread Slavomir Skopalik (JIRA)
FB 2.5.8 hangs
--

 Key: CORE-5958
 URL: http://tracker.firebirdsql.org/browse/CORE-5958
 Project: Firebird Core
  Issue Type: Bug
  Components: Engine
Affects Versions: 2.5.8
 Environment: Windows server 2008, 64 bits
Reporter: Slavomir Skopalik


Firebird hangs (stops responding to any client) and fully utilizes one CPU core.
A dump file was created; ask me privately.

Call stack of the thread that consumes one core:
ntoskrnl.exe!IoAcquireRemoveLockEx+0xe7
ntoskrnl.exe!memset+0x22a
ntoskrnl.exe!KeWaitForMutexObject+0x2cb
ntoskrnl.exe!KeDetachProcess+0x1225
ntoskrnl.exe!PsReturnProcessNonPagedPoolQuota+0x3b3
ntoskrnl.exe!CcSetDirtyPinnedData+0x433
fb_inet_server.exe+0x239a10
fb_inet_server.exe+0x1c0e1e
fb_inet_server.exe+0x1c2105
fb_inet_server.exe+0x1c2493
fb_inet_server.exe+0x1ba506
fb_inet_server.exe+0xbcb88
fb_inet_server.exe+0xbeeb6
fb_inet_server.exe+0xc1d87
fb_inet_server.exe+0xc274e
fb_inet_server.exe+0x1d4cd2
fb_inet_server.exe+0x1d5088
fb_inet_server.exe+0x1d5c2f
fb_inet_server.exe+0xd14dc
fb_inet_server.exe+0xd3119
fb_inet_server.exe+0xd31f3
fb_inet_server.exe+0xd0555
fb_inet_server.exe+0x5a889
fb_inet_server.exe+0x250896
fb_inet_server.exe+0x4faa5
fb_inet_server.exe+0x20f5e
fb_inet_server.exe+0x356f33
fb_inet_server.exe+0x35d6ed
fb_inet_server.exe+0x35df11
fb_inet_server.exe+0x7498d
MSVCR80.dll!endthreadex+0x47
MSVCR80.dll!endthreadex+0x104
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x21


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://tracker.firebirdsql.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira






[Firebird-devel] Add Server_Name into MON$DATABASE

2018-08-12 Thread Slavomir Skopalik

Hi all,

could I add a ticket to extend MON$DATABASE with a Server_Name column
(or something similar with the same meaning)?


Normally I use my own UDF for that, but it is not general enough.

The current UDF implementation uses JclSysinfo:

function GetLocalComputerName: string;
// (rom) UNIX or LINUX?
{$IFDEF LINUX}
var
  MachineInfo: utsname;
begin
  uname(MachineInfo);
  Result := MachineInfo.nodename;
end;
{$ENDIF LINUX}
{$IFDEF MSWINDOWS}
var
  Count: DWORD;
begin
  Count := MAX_COMPUTERNAME_LENGTH + 1;
  // buffer size is MAX_COMPUTERNAME_LENGTH + 1 characters
  { TODO : Win2k solution }
  SetLength(Result, Count);
  if GetComputerName(PChar(Result), Count) then
    StrResetLength(Result)
  else
    Result := '';
end;
{$ENDIF MSWINDOWS}
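For illustration only, the same machine name the UDF returns can be obtained portably like this (a Python sketch; the proposed MON$DATABASE column would of course be computed server-side):

```python
import socket

def get_local_computer_name() -> str:
    # socket.gethostname() works on both Windows and Unix,
    # covering the two {$IFDEF} branches of the Delphi code above.
    return socket.gethostname()

name = get_local_computer_name()
```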

Slavek

--
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





Re: [Firebird-devel] Max precision of CURRENT_TIME(p) and CURRENT_TIMESTAMP(p)

2018-03-24 Thread Slavomir Skopalik

Hi, about PostgreSQL you can look here:

https://www.postgresql.org/docs/9.1/static/datatype-datetime.html

Some details about time in Firebird:

It now uses 30 of the 32 available bits for the time part.

The maximum value is 864,000,000 - 1 (1/10000-second ticks per day);
I'm not sure how Firebird handles leap seconds.

PostgreSQL uses 5 more bits for the time part of a timestamp (see its
limited date range compared with the pure date type).


That means 37 bits, which allows measuring with microsecond precision.

In Firebird, time accuracy can be extended 4 times (2 bits are spare)
without changing the date part, but it will need a new datatype at the
API level.
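The bit counts discussed above can be verified with a little arithmetic (a sketch; tick sizes as described in this thread):

```python
# Firebird stores TIME as 1/10000-second ticks since midnight.
TICKS_PER_DAY = 24 * 60 * 60 * 10_000            # 864_000_000

# 30 bits are enough for the current encoding (values 0 .. 864e6 - 1).
assert (TICKS_PER_DAY - 1).bit_length() == 30

# Full microsecond resolution would need 37 bits per day ...
US_PER_DAY = 24 * 60 * 60 * 1_000_000
assert (US_PER_DAY - 1).bit_length() == 37

# ... while a 4x finer tick (1/40000 s) still fits a 32-bit field.
assert (4 * TICKS_PER_DAY - 1).bit_length() == 32
```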


Slavek


On 24.3.2018 10:19, Dmitry Yemanov wrote:

24.03.2018 11:04, Mark Rotteveel wrote:

Why is the maximum precision of CURRENT_TIME(p) and 
CURRENT_TIMESTAMP(p) three (3) and not four (4)? The underlying data 
type has a precision up to 100 microseconds.


IIRC (but I may be wrong, it was a long time ago): we cannot provide
microsecond precision (only 4 digits are possible), and millisecond
precision was easier to explain than 1/10-millisecond precision.


That said, I see no problems changing MAX_TIME_PRECISION to 4. Even 
better would be to support microseconds, but this would require a new 
underlying datatype. If I'm not mistaken, PostgreSQL uses 8-byte 
timestamp storage and provides microsecond precision, but I suppose 
their supported range of dates somewhat differs.



Dmitry









[Firebird-devel] [FB-Tracker] Created: (CORE-5619) Sleep function into fbudf

2017-09-20 Thread Slavomir Skopalik (JIRA)
Sleep function into fbudf
-

 Key: CORE-5619
 URL: http://tracker.firebirdsql.org/browse/CORE-5619
 Project: Firebird Core
  Issue Type: New Feature
  Components: UDF
Affects Versions: 4.0 Alpha 1, 3.0.2, 2.5.7
 Environment: Any
Reporter: Slavomir Skopalik
Priority: Trivial


Add a Sleep function as a common fbudf function.
This is helpful for testing or while waiting to obtain a lock.
Declaration like:
DECLARE EXTERNAL FUNCTION Sleep
INTEGER NULL
RETURNS INTEGER BY VALUE
ENTRY_POINT 'Sleep' MODULE_NAME 'masaudf';

Implementation (in Pascal):
function Sleep(time:PINT):integer; cdecl;
begin
  result:=0;
  if time=nil then exit;
  windows.Sleep(time^);
end;







[Firebird-devel] [FB-Tracker] Created: (CORE-5616) Cannot drop procedure if using a system domain (RDB$Field_name)

2017-09-18 Thread Slavomir Skopalik (JIRA)
Cannot drop procedure if using a system domain (RDB$Field_name)
---

 Key: CORE-5616
 URL: http://tracker.firebirdsql.org/browse/CORE-5616
 Project: Firebird Core
  Issue Type: Bug
  Components: Engine
Affects Versions: 2.5.7
 Environment: FB 2.5.7 x64 
Reporter: Slavomir Skopalik
Priority: Minor


Simple test case:

SET TERM ^;
CREATE PROCEDURE A(A RDB$Field_Name)
AS
BEGIN
END
^
SET TERM ;^
COMMIT;

DROP PROCEDURE A;
COMMIT;

Cannot commit transaction:
can't format message 13:393 -- message file C:\WINDOWS\firebird.msg not found.
unsuccessful metadata update.
cannot delete.
DOMAIN RDB$FIELD_NAME.
there are 21 dependencies.







[Firebird-devel] FB 2.5.7 frozen

2017-09-03 Thread Slavomir Skopalik

Hi all,

one of my FB 2.5.7 Windows x64 servers periodically freezes (it doesn't
accept new connections).


More info: there is a large number of rollbacks (about ~200,000 per day).

I have a dump file from this situation; if anybody is interested in it,
I can share it.


When it happens again, we will try a 2.5.8 snapshot and I will share the
result.

Slavek






Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-28 Thread Slavomir Skopalik
But this means having some global dispatcher that dies as the last
object during the termination process.


This is far from clean programming.
And by the way, Embarcadero's Delphi IBX code does not account for that.
Slavek


28.08.2017 11:09, Paul Reeves wrote:

The event listener should really be running in its own thread.


  Actually, the callback is already called in its own thread. If a 
programmer is aware of that and codes it properly, there is no 
problem with extra calls.








Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-28 Thread Slavomir Skopalik

The thread is created inside gds32.dll.

Slavek


Maybe I'm missing something, but I seem to recall from the code that
you posted that your listener is running on a timer in the main thread.
The event listener should really be running in its own thread.

Have you tried that?


Paul






Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-28 Thread Slavomir Skopalik




On 28.08.2017 04:15, Slavomir Skopalik wrote:

Should I create a ticket for this?



No. Removing that call will break old code.


You mean old code will not crash so often.
How long do I have to wait to be sure there will be no more callbacks?
Or is there another technique to solve it?



From programming theory, no callback is allowed after a successful
event cancellation.


To be precise - it happens during event cancellation.


This is not true; in the normal case the callback happens a few ms after
cancellation returns.

But on a heavily loaded system, it can be a few seconds.

Slavek





Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-27 Thread Slavomir Skopalik

Should I create a ticket for this?

From programming theory, no callback is allowed after a successful
event cancellation.

Slavek


Hi all,

I'm testing the Firebird-3.0.3.32798-0_Win32 client and I found strange
behavior when canceling events via isc_cancel_events.

I supposed that after that call no callback would happen, but in
reality it does.


Is that correct?



One callback always happens after canceling events; it has been the API 
behavior since InterBase times. You are supposed to close your event 
listener in that callback.












Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-25 Thread Slavomir Skopalik

Not exactly,

procedure EventCallback(P: Pointer; Length: Short; Updated: PByte); cdecl;
var
  e: TFBEvents;
begin
  if Assigned(P) and Assigned(Updated) then begin
    e := TFBEvents(P);
    e.FSO.Enter;  // acquire before try, so Leave matches a successful Enter
    try
      e.EventsReceived := true;
      if e.ResultBuffer <> nil then
        Move(Updated[0], e.ResultBuffer[0], Length);
    finally
      e.FSO.Leave;
    end;
  end;
end;

And the event is fired from a timer:

procedure TFBEvents.OnTimerTimer(Sender: TObject);
var
  i: Integer;
  bt: TBytes;
  FCancelAlerts: Boolean;
begin
  FSO.Enter;
  try
    FDatabase.GDSLibrary.isc_event_counts(@FStatus, EventBufferLen,
      EventBuffer, ResultBuffer);

    if Assigned(FOnEventAlert) and (not FirstTime) then begin
      FCancelAlerts := false;
      for i := 0 to FEvents.Count - 1 do begin
        if FStatus[i] <> 0 then begin
          bt := FEvents[i];
          FOnEventAlert(self, TEncoding.ANSI.GetString(bt, 0,
            Length(bt) - 1), FStatus[i], FCancelAlerts);
        end;
      end;
    end;
    FirstTime := false;
  finally
    FSO.Leave;
  end;
  SQueEvents;
end;

The timer in this example is set to 20 ms,
and one of the events causes event unregistration.

Slavek

One callback always happens after canceling events; it has been the API 
behavior since InterBase times. You are supposed to close your event 
listener in that callback.












Re: [Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-25 Thread Slavomir Skopalik

But I'm talking about fbclient.dll.

It is problematic because, in that concept, once you give fbclient a
callback address, it can call it at any time during the application's
run, independently of isc_cancel_events.


I don't see any problem with the second channel. In the normal model,
when I cancel some async operation,


I invalidate the callbacks in a critical section, wait for the thread to
finish, and that's all.


From my point of view, either I am using fbclient the wrong way or there
is a bug in fbclient.
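The cancellation pattern described here (invalidate the callback inside a critical section so that a late callback is dropped safely) can be sketched as follows. All names are hypothetical illustrations; this is not the fbclient API:

```python
import threading

class EventListener:
    """Sketch: a cancel flag guarded by a lock makes a late callback,
    delivered after cancellation, harmless instead of a crash."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cancelled = False
        self.events_received = 0

    def callback(self, data: bytes) -> None:
        # Called from the library's worker thread, possibly after cancel.
        with self._lock:
            if self._cancelled:
                return            # late callback: drop it safely
            self.events_received += 1

    def cancel(self) -> None:
        with self._lock:
            self._cancelled = True

listener = EventListener()
listener.callback(b"event")       # normal delivery: counted
listener.cancel()
listener.callback(b"late event")  # arrives after cancel: ignored
```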


Slavek


On 25.8.2017 12:42, Jiří Činčura wrote:

I supposed that after that call no callback would happen, but in
reality it does.

Is that correct?

I'd say it is (I have the same in FirebirdClient during tests under
load) and it is even understandable. By design it's a 2nd "channel", and
the processing between the regular connection and the aux connection is
not synchronous. Well, at least on that assumption I've built it in
FirebirdClient.







[Firebird-devel] fbclient - Event call back is called after isc_cancel_events

2017-08-25 Thread Slavomir Skopalik

Hi all,

I'm testing the Firebird-3.0.3.32798-0_Win32 client and I found strange
behavior when canceling events via isc_cancel_events.

I supposed that after that call no callback would happen, but in
reality it does.


Is that correct?

Slavek






[Firebird-devel] [FB-Tracker] Created: (CORE-5597) Failing isc_que_events causes that other operations with DB on same handle are blocked

2017-08-24 Thread Slavomir Skopalik (JIRA)
Failing isc_que_events causes that other operations with DB on same handle are 
blocked
--

 Key: CORE-5597
 URL: http://tracker.firebirdsql.org/browse/CORE-5597
 Project: Firebird Core
  Issue Type: Bug
  Components: API / Client Library
Affects Versions: 2.5.7
 Environment: Windows, DelphiXE7, simplified (no thread) implementation 
of IBEvent
Reporter: Slavomir Skopalik
Priority: Minor


32-bit gds32.dll from 2.5.7.
In case the connection to the DB was established successfully but the auxiliary
event connection failed to establish, other operations on the same DB handle
are blocked.

Calling isc_cancel_events does not help.

If I use fbclient.dll from Firebird-3.0.3.32798-0_Win32, it works fine.









Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-14 Thread Slavomir Skopalik

The situation is clear.

We have to support FB2.5 because of customers; we don't have a choice ->
our code must be compatible with 2.5.

And I know about other companies that are in the same situation.

If somebody creates an FB3-style version, it will be welcome.

Will someone do that?

Slavek





Discussion was about naming convention.



Not only. We have a reasonable suggestion from Adriano to create a
package instead of artificial prefixes.








Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-14 Thread Slavomir Skopalik


BTW - why not use snapshots? Currently they pass the same test control
as releases. Review the test results and, if nothing critical is
present, feel free to work with a snapshot. For stable releases,
snapshots are pretty stable.


I'm not brave enough to install a snapshot into a production environment.



Now we are using FB2.5 for production and development. For testing we
are using FB3.


Back to the original topic, I don't believe that this is the right time
to drop support for FB2.5.

Nobody suggests that. But please distinguish between support and new
features. Developing new features limited by an old version is not a
good idea.


Discussion was about the naming convention.

There are three things:
1. Already developed (we have more, but they have to be polished before commit)
2. To be developed, but for FB2.5 (sorry, but I will not deploy
snapshots to customers)

3. FB3-related things

I don't see any problem here.
Whoever already has SQL developed for FB2.5 may add it (it must be
compatible with FB3).
Whoever needs something new for FB2.5 can also add it (it must also be
compatible with FB3).
Whoever is already using FB3 can use the FB2.5 code or can add
FB3-specific things (without FB2.5 compatibility).


Slavek






Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-14 Thread Slavomir Skopalik

Maybe we can start a new topic about bug fixing and release strategy.
Two examples:

1. Database corruption (http://tracker.firebirdsql.org/browse/CORE-5392)
that I reported 06/Nov/16; a release containing this fix was available
22-Mar-2017.
2. Server crash (http://tracker.firebirdsql.org/browse/CORE-5562) that
was reported 07/Jun/17; the fix will be released Q3-Q4 2017.


Because of 1 we had to downgrade to 2.5, and until now no stable release
is available.


Now we are using FB2.5 for production and development. For testing we
are using FB3.


Back to the original topic, I don't believe that this is the right time
to drop support for FB2.5.


But the library can contain FB3-specific things; contributors are
welcome.

Slavek

On 14.8.2017 14:29, Alex via Firebird-devel wrote:

On 14.08.2017 15:21, Slavomir Skopalik wrote:

Hi Adriano, we have serious stability problems with FB3.


What problems? They are to be discussed here if one wants them to be 
solved.










Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-14 Thread Slavomir Skopalik

Hi Adriano, we have serious stability problems with FB3.
And I have information that other companies have similar problems too.

That is the reason to develop the library for 2.5 with FB3 compatibility.

Slavek



My opinion is to forget FB 2.5 and create a package instead of prefixing
every object.


Adriano








Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-13 Thread Slavomir Skopalik

Thank you, good to know.

But compatibility with 2.5 is a must (we cannot use 3.0.2 because of its
lack of stability).

How do we solve it for both versions?
Or I can prepare a Windows bat file that will call isql.
Slavek


Em 13/08/2017 14:49, Slavomir Skopalik escreveu:

3. The best would be if each SP (or function or ...) were in a separate
file, with one file that covers (INPUTs) all of them.

But isql is not friendly here: if you use a relative path, the path is
resolved from isql's current directory, not relative to the file
containing the INPUT command.

Is there any workaround for this?

Since FB 3, it should work as you like:

http://tracker.firebirdsql.org/browse/CORE-2575


Adriano








Re: [Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-13 Thread Slavomir Skopalik

Hi Paul,

I created https://github.com/skopaliks/Firebird-SQL-Lib

but I have some stupid questions:

1. What kind of license will be best for this project?

I chose LGPL3; is that OK?

2. What prefix do you prefer for this?

I prefer LIB$, like LIB$CheckTriggerUniquePostion.

Any other ideas?

3. The best would be if each SP (or function or ...) were in a separate
file, with one file that covers (INPUTs) all of them.


But isql is not friendly here: if you use a relative path, the path is
resolved from isql's current directory, not relative to the file
containing the INPUT command.


Is there any workaround for this?

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 1.8.2017 13:13, Paul Reeves wrote:

On Tue, 1 Aug 2017 11:49:42 +0200 Slavomir Skopalik wrote


Hi,

will it be possible to extend the Firebird installation with SQL stored
procedures that solve common problems?

It would be like UDFs (let's say in a directory SQL), and everyone can
include/use this SQL like fbudf.



It is a great idea. I'm sure I'm not the only one to have developed a
few SPs over the years that might be useful. However, it needs
someone to organise it, curate the contents and create QA tests to
verify that the stored procedures work correctly with each new release.
It is not too much work, but it does require commitment if the
intention is that these procedures should ship with Firebird. Bit rot
can set in very quickly if no-one is around to maintain code.

Perhaps the first step would be to create a github project to allow
people to contribute.


Paul







[Firebird-devel] Useful SQL Stored Procedures as part of standard firebird installation

2017-08-01 Thread Slavomir Skopalik

Hi,

would it be possible to extend the Firebird installation with SQL stored 
procedures that solve common problems?


It would be like the UDFs (say, in a directory SQL), and everyone could 
include/use this SQL like fbudf.


Examples of useful ones:
An SP that returns complete metadata for another SP.
An SP that sets NULL/NOT NULL for a column and works on both FB2.5 and FB3.
Some examples from our development:

-- Drop the primary key of a given table
CREATE OR ALTER PROCEDURE MASA$DropPrimaryKey(Relation RDB$Relation_Name)
AS
DECLARE VARIABLE cn VARCHAR(500) = NULL;
DECLARE VARIABLE SQL VARCHAR(600);
BEGIN
  SELECT TRIM(rc.rdb$constraint_name)
    FROM rdb$relation_constraints RC
      LEFT JOIN rdb$indices I ON RC.rdb$index_name = I.rdb$index_name
    WHERE I.rdb$relation_name = :Relation
      AND RC.rdb$constraint_type = 'PRIMARY KEY'
    INTO :cn;
  IF (cn IS NOT NULL) THEN BEGIN
    SQL = 'ALTER TABLE ' || TRIM(Relation) || ' DROP CONSTRAINT ' || cn || ';';
    EXECUTE STATEMENT SQL;
  END
END

Slavek

--
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





Re: [Firebird-devel] Changing IRoutineMetadata in Plugin::makeProcedure

2017-04-18 Thread Slavomir Skopalik
Hi Jiri,
you must decode the input UTF-8 buffer into a string (probably Unicode) and 
then trim the trailing spaces.
To test it, just put some non-US-ASCII characters into the text.
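A minimal sketch of that decoding step (illustrative only, not Firebird API code; it assumes the CHAR value arrives as a space-padded buffer of known byte length):

```cpp
#include <cstddef>
#include <string>

// Trim the trailing 0x20 pad bytes of a fixed-length CHAR buffer.
// This is safe even for UTF-8 data, because no UTF-8 multi-byte
// sequence contains the byte 0x20.
std::string char_field_to_string(const char* buf, std::size_t len)
{
    while (len > 0 && buf[len - 1] == ' ')
        --len;
    return std::string(buf, len);
}
```

For the buffer from the dump below, this would recover the 20-character value without the 60 padding spaces.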

Slavek

> The length from metadata is 80. The memory dump where the pointer points
> is:
> { 80, 0, 49, 50, 51, 52, 53, 54, 55, 56, 57, 48, 49, 50, 51, 52, 53, 54,
> 55, 56, 57, 48, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,
> 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,
> 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,
> 32, 32, 32, 32, 32, 32, 32, 32, 32 }
>
> Which clearly shows the first two bytes are reporting 80 as well. Fair
> enough. Reading that buffer as UTF8 (or just looking at it, is basically
> plain US-ASCII, nothing fancy) you get
> "12345678901234567890"
> (I replaced the spaces with visible character). It's clearly correct
> string at the beginning and then 60(!) spaces. And that doesn't look
> correct to me at all.
>
> I think that's what Dimitry S. was pointing to.
>
> With all that, unless I'm doing something wrong, I'd need to do some
> trimming (or forget about CHARs altogether). The conversion to SQL_VARYING
> gets me only so far.
>
>> This change from SQL_TEXT->SQL_VARYING will not work only in some old
>> versions (maybe firsts 2.1.x or 2.0.x).
> Fine for me, as external engine needs 3.0 anyway.
>





[Firebird-devel] FB2.5.6 - '' = ' ' is evaluated as true

2016-12-07 Thread Slavomir Skopalik
Hi,

is it correct that the empty string '' compared with the one-space string ' ' 
is evaluated as true?

SELECT * FROM rdb$database WHERE ''=' '

FB 2.5.6, database dialect 1
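For context, standard SQL prescribes "PAD SPACE" semantics for character comparisons: the shorter operand is padded with spaces to the length of the longer one before comparing, so '' and ' ' compare equal. A sketch of that rule (an illustration, not engine code):

```cpp
#include <algorithm>
#include <string>

// SQL "PAD SPACE" comparison: pad both operands with spaces to a
// common length, then compare byte by byte.
bool sql_pad_space_equal(std::string a, std::string b)
{
    const std::size_t n = std::max(a.size(), b.size());
    a.resize(n, ' ');  // pad the shorter operand with spaces
    b.resize(n, ' ');
    return a == b;
}
```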

Slavek


-- 
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





[Firebird-devel] [FB-Tracker] Created: (CORE-5392) internal Firebird consistency check (decompression overran buffer (179), file: sqz.cpp line: 282)

2016-11-06 Thread Slavomir Skopalik (JIRA)
internal Firebird consistency check (decompression overran buffer (179), file: 
sqz.cpp line: 282)
-

 Key: CORE-5392
 URL: http://tracker.firebirdsql.org/browse/CORE-5392
 Project: Firebird Core
  Issue Type: Bug
  Components: Engine
Affects Versions: 3.0.1
 Environment: Windows 10 64bit, Firebird 3.0.1.32609 official build 
super server, connection to DB by TCP localhost looback.
Reporter: Slavomir Skopalik
Priority: Critical


This happens in the following situation (verified 4 times):
1. The DB was backed up on FB 2.5 Win64 and restored under FB 3.0.1
2. DB upgrade scripts were run against this DB
3. The last script contains a high number (~1M) of updates/inserts and finally failed on a 
custom exception
  exception 27
-EEXCEPTION
-Unsupported combination idDefectType:1001 idDevice:1 idState:1006 
idScrapReason:1000
-At procedure 'KGS_NEWCUTTYPE' line: 57, col: 27

4. The next run of the same script (without touching any other utilities):
D:\Dokumenty\kingspan\sql\upgrade>c:\fb\isql.exe -b -e -q -charset UTF8 -u 
sysdba -p masterkey -i 099_CutTypes_remap.inc 
localhost:d:\fbdata\kingspan_hu_prod.fdb  1>>_test.sql
Statement failed, SQLSTATE = 40001
lock conflict on no wait transaction
-At procedure 'KGS_NEWCUTTYPE' line: 61, col: 5
After line 26 in file 099_CutTypes_remap.inc

But the report from MON$ATTACHMENTS (queried as part of this script) is:
MON$STATE   MON$ATTACHMENT_NAME MON$USERMON$ROLE
MON$REMOTE_PROTOCOL

0   D:\FBDATA\KINGSPAN_HU_PROD.FDB  Cache   Writer  
0   D:\FBDATA\KINGSPAN_HU_PROD.FDB  Garbage Collector   
1   D:\FBDATA\KINGSPAN_HU_PROD.FDB  SYSDBA  NONETCPv6


5. When I run the last script again, I receive this:
D:\Dokumenty\kingspan\sql\upgrade>c:\fb\isql.exe -b -e -q -charset UTF8 -u 
sysdba -p masterkey -i 099_CutTypes_remap.inc 
localhost:d:\fbdata\kingspan_hu_prod.fdb  1>>_test.sql
Statement failed, SQLSTATE = XX000
internal Firebird consistency check (decompression overran buffer (179), file: 
sqz.cpp line: 282)
After line 26 in file 099_CutTypes_remap.inc

Database is in dialect 1

FB log contains:
CORE-I7-6700K   Sun Nov 06 13:03:20 2016
Database: D:\FBDATA\KINGSPAN_HU_PROD.FDB
internal Firebird consistency check (decompression overran buffer 
(179), file: sqz.cpp line: 282)

After that, the FB service cannot be stopped; it must be killed with Process Explorer.



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://tracker.firebirdsql.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira





[Firebird-devel] DPM.epp - DPM_store

2016-10-14 Thread Slavomir Skopalik
Hi,

I would like to confirm that I understand this simplified code properly:

If the record is fragmented, then it always has a 48-bit transaction id.

If it is not fragmented and the transaction ID is less than 4G, then the record 
can be padded with zeros, and these zeros reach the compressor as a regular part 
of the compressed record.

Am I right?

Slavek

void DPM_store(thread_db* tdbb, record_param* rpb, PageStack& stack,
               const Jrd::RecordStorageType type)
{
/**
 *
 *      D P M _ s t o r e
 *
 * Functional description
 *      Store a new record in a relation.  If we can put it on a
 *      specific page, so much the better.
 *
 **/
    SET_TDBB(tdbb);
    Database* dbb = tdbb->getDatabase();

    const Compressor dcc(*tdbb->getDefaultPool(), rpb->rpb_length, rpb->rpb_address);
    const ULONG size = (ULONG) dcc.getPackedLength();

    const FB_SIZE_T header_size = (rpb->rpb_transaction_nr > MAX_ULONG) ? RHDE_SIZE : RHD_SIZE;

    SLONG fill = (RHDF_SIZE - header_size) - size;
    if (fill < 0)
        fill = 0;

    // Accomodate max record size i.e. 64K
    const SLONG length = header_size + size + fill;
    rhd* header = locate_space(tdbb, rpb, (SSHORT) length, stack, NULL, type);

    header->rhd_flags = rpb->rpb_flags;
    Ods::writeTraNum(header, rpb->rpb_transaction_nr, header_size);
    header->rhd_format = rpb->rpb_format_number;
    header->rhd_b_page = rpb->rpb_b_page;
    header->rhd_b_line = rpb->rpb_b_line;

    UCHAR* const data = (UCHAR*) header + header_size;

    dcc.pack(rpb->rpb_address, data);

    if (fill)
        memset(data + size, 0, fill);
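The arithmetic in question can be condensed into two pure functions (a sketch only; the constant values used in the tests are placeholders, not the real ODS header sizes — only the relationships matter):

```cpp
#include <algorithm>
#include <cstdint>

// A short record is zero-padded so that header + packed data is never
// smaller than a fragment header (rhdf_size); the pad bytes go through
// the compressor as a regular part of the record.
long compute_fill(long rhdf_size, long header_size, long packed_size)
{
    long fill = (rhdf_size - header_size) - packed_size;
    return std::max(fill, 0L);  // never negative
}

// As in the quoted code: the wider header (RHDE_SIZE) is used only when
// the transaction number does not fit into 32 bits.
long pick_header_size(std::uint64_t tra_num, long rhd_size, long rhde_size)
{
    return (tra_num > UINT32_MAX) ? rhde_size : rhd_size;
}
```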

-- 
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





Re: [Firebird-devel] ODP: FB3.0.1 - Impossible to alter table

2016-10-13 Thread Slavomir Skopalik
Hi,

I tested FB 3.0.1 Windows 64 with a different result.

DeviceData is a table with several millions of rows.

alter table DeviceData add a integer default 7 not null;

This is fast, just a few ms.

But the commit took several minutes, with high disk activity.

I also made another test:

1. alter table with default.

2. commit

3. drop the default

4. commit

5. select, and you will see that the new column has the default value, but 
the metadata is without a default.

From my point of view this is as it should be, because otherwise you would 
have an inconsistent DB, like in FB2.5 after adding a not null column.

Slavek

>> Big disadvantages of default is:
>>
>> 1. Immediately update all rows in table.
>>
>> Interesting only if all or most already existed rows will use this value.
>>
> No. DEFAULT is a constant expression and is stored in table metadata and
> associated with the new table format number. It does not update existing
> rows, so is much better than manually update the table.
>
>
> Adriano
>
>





Re: [Firebird-devel] ODP: FB3.0.1 - Impossible to alter table

2016-10-13 Thread Slavomir Skopalik
The big disadvantages of DEFAULT are:

1. It immediately updates all rows in the table.

This is interesting only if all or most already existing rows will use this value.

2. It is less readable; this is important for me.

I prefer this scenario:

ALTER TABLE ... ADD null able;

COMMIT;

UPDATE ...

COMMIT;

ALTER TABLE ... ALTER ... SET NOT NULL;

COMMIT;

For backward compatibility (currently I must support FB2.5) I use this SP 
instead of SET NOT NULL:

CREATE OR ALTER PROCEDURE MASA$Set_Null_Flag(
  Relation_Name RDB$Relation_Name,
  Field_Name RDB$Field_Name,
  Not_Null SMALLINT)
AS
DECLARE major INTEGER;
DECLARE ds VARCHAR(500);
DECLARE nf VARCHAR(20);
BEGIN
  major = COALESCE(SubStrFromStr(rdb$get_context('SYSTEM', 'ENGINE_VERSION'), '.', 0), 0);
  IF (major >= 3) THEN BEGIN
    nf = ' DROP NOT NULL;';
    IF (Not_Null = 1) THEN nf = ' SET NOT NULL;';
    ds = 'ALTER TABLE ' || TRIM(Relation_Name) || ' ALTER ' || TRIM(Field_Name) || nf;
  END ELSE BEGIN
    nf = 'NULL';
    IF (Not_Null = 1) THEN nf = '1';
    ds = 'UPDATE RDB$RELATION_FIELDS SET RDB$NULL_FLAG = ' || nf ||
         ' WHERE RDB$FIELD_NAME = ''' || TRIM(Field_Name) || '''' ||
         ' AND RDB$RELATION_NAME = ''' || TRIM(Relation_Name) || ''';';
  END
  EXECUTE STATEMENT ds WITH AUTONOMOUS TRANSACTION;
END

Slavek

On 13.10.2016 17:49, Jiří Činčura wrote:
>> 1.add field with "default"
> Which you can later remove.
>





Re: [Firebird-devel] ODP: FB3.0.1 - Impossible to alter table

2016-10-12 Thread Slavomir Skopalik
Hi,

the correct sequence in this case is:

ALTER TABLE Defects ADD idDefectType TLongInt;

COMMIT;

UPDATE ...

COMMIT;

ALTER TABLE Defects ALTER idDefectType SET NOT NULL;

COMMIT;

The commits are necessary.

Slavek


>  
> hi,
> yes, it is expected. You must:
> 1. add the field with "default"
> or
> 2. add the field without not null, update the table and set that field's 
> value, and after that change the field to not null
> regards, Karol Bieniaszewski
>
> ---- Original message ----
> From: Slavomir Skopalik <skopa...@elektlabs.cz>
> Date: 12.10.2016  02:32  (GMT+01:00)
> To: For discussion among Firebird Developers 
> <firebird-devel@lists.sourceforge.net>
> Subject: [Firebird-devel] FB3.0.1 - Impossible to alter table
>
> Hi all,
>
> If I have a table that contains some rows, in FB 3.0.1 it is not possible to
> add a new NOT NULL column.
>
> Example:
>
> ALTER TABLE Defects ADD idDefectType TLongInt NOT NULL;
>
> COMMIT;
>
> Cannot commit transaction:
> unsuccessful metadata update.
> Cannot make field IDDEFECTTYPE of table DEFECTS NOT NULL because there
> are NULLs present.
>
> If I try to set a value, it causes:
>
> ALTER TABLE Defects ADD idDefectType TLongInt NOT NULL;
> update Defects SET idDefectType=0;
>
> Dynamic SQL Error.
> SQL error code = -206.
> Column unknown.
> IDDEFECTTYPE.
> At line 1, column 20.
>
> Is it a bug, or is it expected?
>
> Slavek
>
>





[Firebird-devel] FB3.0.1 - Impossible to alter table

2016-10-11 Thread Slavomir Skopalik
Hi all,

If I have a table that contains some rows, in FB 3.0.1 it is not possible to 
add a new NOT NULL column.

Example:

ALTER TABLE Defects ADD idDefectType TLongInt NOT NULL;

COMMIT;

Cannot commit transaction:
unsuccessful metadata update.
Cannot make field IDDEFECTTYPE of table DEFECTS NOT NULL because there 
are NULLs present.

If I try to set a value, it causes:

ALTER TABLE Defects ADD idDefectType TLongInt NOT NULL;
update Defects SET idDefectType=0;

Dynamic SQL Error.
SQL error code = -206.
Column unknown.
IDDEFECTTYPE.
At line 1, column 20.

Is it a bug, or is it expected?

Slavek


-- 
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





Re: [Firebird-devel] FB 3.0.1 - Different concatenation result to FB 2.5.6

2016-10-10 Thread Slavomir Skopalik
Hi Alex,
I'm not worried about 2.5. I just put it here for info.

Slavek

PS: The spaces were deleted by HTML in the e-mail :(
> T2
> ==
> T1 _SUFIX
> T1234567890_SUFIX
>
> where _SUFIX starts exactly at position == 31 (see message text with
> monospace font to make sure), i.e. after the spaces present in the end
> of CHAR(31) field RDB$RELATION_NAME. I'm not sure we will fix it in 2.5.
>
>
>
>





[Firebird-devel] FB 3.0.1 - Different concatenation result to FB 2.5.6

2016-10-10 Thread Slavomir Skopalik
Hi,

I found a difference in concatenation behavior between FB 2.5.6 and FB 3.0.1.

FB 2.5.6:

EXECUTE BLOCK RETURNS(T2 VARCHAR(70)) AS
DECLARE T1 RDB$RELATION_NAME = 'T1';
BEGIN
   T2 = T1 || '_SUFIX';
   SUSPEND;
   T1 = 'T1234567890';
   T2 = T1 || '_SUFIX';
   SUSPEND;
END

T2
=================
T1 _SUFIX
T1234567890_SUFIX

FB 3.0.1

EXECUTE BLOCK RETURNS(T2 VARCHAR(70)) AS
DECLARE T1 RDB$RELATION_NAME = 'T1';
BEGIN
   T2 = T1 || '_SUFIX';
   SUSPEND;
   T1 = 'T1234567890';
   T2 = T1 || '_SUFIX';
   SUSPEND;
END

T2
=================
T1 _SUFIX
T1234567890 _SUFIX

Slavek

-- 
Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz





Re: [Firebird-devel] Record level compression for V4

2016-05-01 Thread Slavomir Skopalik

Hi Dmitry,

I will send the latest version, which has some improvements.

In any case, feel free to contact privately (E-mail or skype).

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 1.5.2016 8:56, Dmitry Yemanov wrote:

Slavomir,


is it a good time (V4) to commit the new record level compression?

http://elektlabs.cz/fbrle/

Is this download the latest version?
http://elektlabs.cz/fbrle/FirebirdWin64_ElektLabs.zip


If yes, will you need my help for this task?




Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Record level compression for V4

2016-03-04 Thread Slavomir Skopalik
Hi,
to be able to do it, I need to switch to record level compression, and for 
this task I need your help.

At first, you can use a memory move instead of compression; I will replace it 
with Elekt Labs RLE (+LZ4) in the first iteration.
Next I will change to value encoding, or value encoding + LZ4 (or LZ4 HC).

Finally you will be able to choose between RLE and value encoding.

Is it OK for you?

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 4.3.2016 21:27, Dmitry Yemanov wrote:
> 04.03.2016 18:49, Slavomir Skopalik wrote:
>> if you help me with the integration, I will do it.
> I will. Just consider it a trial development. I cannot promise anything
> before seeing the test results.
>
>
> Dmitry
>
>
>





Re: [Firebird-devel] Record level compression for V4

2016-03-04 Thread Slavomir Skopalik
Hi Dmitry,
if you help me with the integration, I will do it.

Slavek

> I have nothing against a value-based encoding. But our v4 development
> tasks currently do not include inventing one. If someone jumps in with a
> prototype implementation, I'd be definitely willing to review/consider it.
>
>
> Dmitry
>
>
>





Re: [Firebird-devel] Record level compression for V4

2016-03-03 Thread Slavomir Skopalik
1. Data can be split (and already is) at any point, not only at the specific 
points that you mark with '!'.
2. If the packed record doesn't fit into one page, the control sequence is 
parsed from the end.
3. When the rest of the sequence fits, compression is started again from 
the beginning.
4. If a fragment has an odd length, a zero is added.

If you generate a compressed buffer instead of a control sequence, you can use
any compression/encoding scheme.
Whether you store this compressed buffer in one or more fragments is 
irrelevant; you can join/split it as you want. But for decompression you have 
to collect all the fragments into one buffer.

Slavek

> 03.03.2016 18:34, Slavomir Skopalik wrote:
>> But value encoding cannot be implemented until we switch from fragment
>> compression to true record level compression.
> I was not speaking about any encoding, just about compacting the record.
>
> Fragments are not compressed independently. The whole record is being
> prepared for compression (control bytes are generated) and then splitted
> into multiple chunks accordingly to the control bytes, so that a single
> compressed sequence is not interrupted in the middle:
>
> 'aaaaoooiiiuee'
> {-4, 'a'} ! {-3, 'o'} ! {-3, 'i'} ! {1, 'u'} ! {2, 'ee'}
>
> This compressed sequence can be fragmented at any of the "!" points.
> This is needed to decompress them separately, without copying all the
> fragments into a single buffer before decompressing, i.e. fragment by
> fragment.
>
> But for the pack/unpack approach, we just copy data in chunks. Some
> field may have first N bytes stored in fragment 1 and the remaining M
> bytes stored in fragment 2. We know we need N+M bytes for the record. So
> we copy N bytes from fragment 1, release the page, fetch fragment 2 and
> continue copying N+1..M bytes into the record. Fragmenting is even
> easier than now, it can happen at any place.
>
> Am I missing something obvious?
>
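The control-byte scheme described above can be sketched as a tiny decoder (an illustration only, not the engine's actual sqz.cpp code; for readability the control bytes are kept separate from the data bytes, whereas the real format interleaves them):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Classic RLE control bytes: a negative value -n means "repeat the next
// data byte n times", a positive value n means "copy the next n data
// bytes verbatim".
std::string rle_unpack(const std::vector<int>& control, const std::string& data)
{
    std::string out;
    std::size_t pos = 0;
    for (int c : control) {
        if (c < 0) {
            out.append(static_cast<std::size_t>(-c), data[pos++]);  // run
        } else {
            out.append(data, pos, static_cast<std::size_t>(c));     // literal
            pos += static_cast<std::size_t>(c);
        }
    }
    return out;
}
```

Decoding the example above, `rle_unpack({-4, -3, -3, 1, 2}, "aoiuee")` reproduces `"aaaaoooiiiuee"`; a fragment boundary may only fall between control-byte units, which is the "!" constraint being discussed.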





Re: [Firebird-devel] Record level compression for V4

2016-03-03 Thread Slavomir Skopalik
Hi Jim,
Elekt Labs RLE is ready to use and is better than the current one.

The best one still has to be developed, and I will help if I can.

The question is still the same: is it the right time, or will it be postponed 
to V5 (2024+)?

Slavek

>
> Excuse me, but faster than the current one is not the question. The 
> question is what is the best one.





Re: [Firebird-devel] Record level compression for V4

2016-03-02 Thread Slavomir Skopalik
Hi,
this is about record encoding.
My private tests with RLE + LZ4 show that, of the combinations RLE, LZ4, and 
RLE + LZ4, the last is the best.
I believe that better record encoding will help wire transfer in any case.

On the other hand, record encoding can be used in all situations (on average 
its speed can be around 10-30% of a memory move), but zlib over a local 
network can be a limitation.

Slavek

> With the zlib support in the v13 wire protocol, I am not sure another
> layer of compression in the protocol is a good idea. If it happens
> though, I would really appreciate clear specifications outside the code.
>
> Mark





Re: [Firebird-devel] Record level compression for V4

2016-03-02 Thread Slavomir Skopalik
Hi Jim,
I know, but in the current situation it is a little problematic.

The current compression is done on record fragments, not on the full record.
I hacked Elekt Labs RLE to be able to work in this situation.
My tests show a big advantage compared to the current V3 RLE.

If we switch to real record-level compression/encoding, this will give a
significant performance boost for encoding/decoding.
To do this I will need some help from the community to find the right place 
for compression/decompression.

When we have record compression ready, it can be used not only for the 
remote protocol, but also for temp spaces.
Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 2.3.2016 17:52, Jim Starkey wrote:
> On 3/2/2016 10:08 AM, Slavomir Skopalik wrote:
>> Hi,
>> is it a good time (V4) to commit the new record level compression?
>>
>> http://elektlabs.cz/fbrle/
>>
>> If yes, will you need my help for this task?
>>
>> Slavek
>>
> There are other encoding/compression schemes that need to be considered,
> in particular, value based encoding, which would also work very well in
> the remote protocol.
>
>





Re: [Firebird-devel] Enhancement for numbers calculations

2015-12-07 Thread Slavomir Skopalik

> SS> Yes, but now you have to tell FB explicitly which part of the number has
> SS> to be discarded.
>
> And you still could do this, as you wish :) You remain with full
> control, if/when you prefer to have it. I guess the proposed
> enhancement will be useful for 99% of the cases, when user don't care
> about loosing some accuracy in the middle of the calculation, but
> still cares about accuracy of stored data (what you store is exactly
> what you get when you retrieve it). For the other 1%, user still has
> full control doing his own casts as desired.
OK, can you give an example of how the engine (with your extension) will handle this:

select 1.0 / 1.00 from rdb$database

and at the same time this

select 0.1 / 9.00 from rdb$database

and at the same time this

select 1.1 / 0.01 from rdb$database

>
> SS> Can you give me real example with decimal casting?
>
> What do you mean?

I don't understand your situation from a practical point of view.

If you can accept losses during computation, why can't you accept DOUBLE 
PRECISION?

I would be happy if Firebird supported int128, NUMERIC up to scale 31 and 
beyond, and much more :).

Slavek





Re: [Firebird-devel] Enhancement for numbers calculations

2015-12-07 Thread Slavomir Skopalik

> SS> 1. Decimal or Numeric is used to keep exact accuracy.
> SS> If you don't need this accuracy, use floating-point data types.
>
> You didn't get the point. There is no loss of accuracy. The data types
> will continue working as is. Remember, as I said, today you already have to 
> cast
> if you want to make the formula work, so you are already "losing"
> digits.
Yes, but now you have to tell FB explicitly which part of the number has 
to be discarded.

If the engine does it implicitly instead of raising an error, you will 
receive some result, but it can be incorrect.
For that purpose (unpredictable rounding errors), use the DOUBLE PRECISION 
data type.

Can you give me a real example with decimal casting?

> SS> 2. A system that produces unpredictable results in math is really
> SS> hard to use.
> SS>   Some numbers will be rounded, truncated, or modified.
> SS>   There will be a lot of risk in financial calculations because the
> SS>   real result will depend on the server config.
>
> What risks? As I said, current formulas will keep working in the
> exact same way. About the server config: personally, I think that
> parameter is not needed.
You wrote something about a hidden conversion from decimal to IEEE (I
read that as IEEE floating point).
Some decimal numbers have no exact representation in floating point.

See here:
https://en.wikipedia.org/wiki/Floating_point#IEEE_754:_floating_point_in_modern_computers
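The non-representability point can be demonstrated directly; a quick Python illustration (Python floats are IEEE 754 doubles):

```python
from decimal import Decimal

# Decimal(float) exposes the value actually stored in the IEEE 754 double:
# 0.1 has no exact binary representation, so a nearby value is stored.
print(Decimal(0.1))         # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)     # False: accumulated representation error
```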

>
> SS>  From my point of view, this auto casting will generate more problems
> SS> than it solves.
>
> Do you have a better solution for the proposed problem?
>

Some notes about floating-point math and fixed-point math:

1. Fixed point is more accurate; it always produces the same result on
   all hardware.
   Useful for prices, counts, taxes, ...
2. Floating point is more flexible, but less accurate, and can produce
   different results on different hardware.
   Useful for measured values and common situations where you don't care
   about small errors.
In normal situations, the real error is negligible.
But in some special situations it can be relevant, e.g. statistical
computing.

The final solution would be support for arbitrary-precision arithmetic:
https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
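As an illustration, Python's stdlib decimal module is one arbitrary-precision implementation: the working precision is a runtime setting rather than a hardware property (the 50-digit precision below is an arbitrary choice for the demo):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # precision is configurable, not fixed by HW
print(Decimal(1) / Decimal(7))  # 50 significant digits of 1/7
# Exact decimal fixed-point math, unlike binary floats:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```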

Slavek






Re: [Firebird-devel] Enhancement for numbers calculations

2015-12-07 Thread Slavomir Skopalik
I have several points against this idea:

1. DECIMAL or NUMERIC is used to keep exact accuracy.
   If you don't need this accuracy, use floating-point data types.

2. A system that produces unpredictable results in math is really hard
   to use.
   Some numbers will be rounded, truncated, or modified.
   There will be a lot of risk in financial calculations because the
   real result will depend on the server config.

From my point of view, this auto casting will generate more problems
than it solves.

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 7.12.2015 17:21, Carlos H. Cantu wrote:
> I know there are plans to support long numbers in Firebird. I have no
> idea what is the schedule for its implementation or how much this
> subject has been discussed before. Anyway, this email is not to
> discuss long numbers implementation, although the subject can be more
> or less related.
>
> I want to discuss an implementation/enhancement to avoid "unnecessary"
> overflow errors with calculations using currently exact types
> (numeric, decimal - dialect 3).
>
> With the current logic, the result of a multiplication or division will
> have Scale = sum of the scales of its operands. This causes nonsense
> situations like the following:
>
> select 1.0 / 1.00 from rdb$database
>
> resulting in:
> Arithmetic overflow or division by zero has occurred. arithmetic
> exception, numeric overflow, or string truncation. numeric value is
> out of range.
>
> Even if the current logic is defined by the SQL standard, for the end
> user this is usually a pain.
>
> My suggestion is to implement a "smarter" logic to be used in
> calculations, to avoid such glitches whenever it is possible, without
> the need to create new data types.
>
> In short, the idea would be: do the calculation without worrying about
> scale limits, and cast the final result to fit in the desired data
> type.
>
> At first thought, I would propose that for the internal calculation
> (meaning Firebird doing its internal math to give the formula result),
> it should not limit the scale at all. Use the maximum possible scale
> (cast/round/truncate when needed), or maybe use an IEEE format in the
> intermediate calcs, to avoid overflow errors due to scale limit being
> reached. If the final value has a scale that cannot fit in the
> desired field/variable data type, it will be automatically cast to
> it.
>
> For those afraid of "legacy" formulas starting to return different
> results, the parser can be smart enough to apply the new logic only when
> needed; otherwise it would use the old (current) logic. For the
> "paranoid", there could even be a parameter in fb.conf to disable the new
> logic entirely (although, personally, I don't think this is needed).
>
> Currently, users already need to use "workarounds" to be able to work
> with those situations, meaning that some degree of accuracy is already
> being lost. Usually, they will split the formula into "groups" and use
> casts. Those "legacy" formulas would still work as designed, producing
> the same result, since the parser would use the old logic.
>
> Comments? Ideas? Suggestions? Bashing? :)
>
> []s
> Carlos
> http://www.firebirdnews.org
> FireBase - http://www.FireBase.com.br
>
>
>





Re: [Firebird-devel] The Power of C++11 in CUDA 7

2015-03-23 Thread Slavomir Skopalik
Last time, when I tested my record-level compression, I saw these changes:

The DB size decreased from 90 GB to 60 GB.
A select count(*) on a table like this one:

Create Table ProductDataEx  (
 idProduct TLongInt NOT NULL,
 idMeasurand Smallint NOT NULL,
 idMeasurementMode TSmallInt NOT NULL,
 ValIndex Smallint Default 0 NOT NULL,
 idPeople TSmallInt NOT NULL,
 tDate TimeDateFutureCheck NOT NULL,
 Value1 Double precision NOT NULL,
 Description TMemo,
Constraint pk_ProductDataEx Primary Key 
(idProduct,idMeasurand,idMeasurementMode,ValIndex)
);

decreased from ~150 s (any run) to 52 s on the first run and 36 s on later runs.

If you are interested, I can send you the source code or publish a compiled
FB3 for Windows x64.

Slavek

On 22.3.2015 14:21, Thomas Steinmaurer wrote:
 I'm confused. ;-)

 With FB 2.5.2 SC 64-bit on Windows 7 Prof.

 While copying an 18GB database from folder A to B on the same spinning
 physical disk at ~33MB/s read + ~33MB/s write (thus 66MB/s total), a
 select count(*) on that database (8K page size), for a table with ~6Mio
 records, runs at a physical disk read rate (according to perfmon) of only
 ~7.8MB/s.

 The system has been freshly rebooted, thus the database is not in the
 file system cache nor in the FB connection page cache.

 A very naive test, but as I would expect a COUNT(*) with cold caches to
 be purely I/O bound, at max. ~7.8MB/s we are far away from fully
 utilizing disk I/O.


 Regards,
 Thomas


 --
 Dive into the World of Parallel Programming The Go Parallel Website, sponsored
 by Intel and developed in partnership with Slashdot Media, is your hub for all
 things parallel software development, from weekly thought leadership blogs to
 news, videos, case studies, tutorials and more. Take a look and join the
 conversation now. http://goparallel.sourceforge.net/
 Firebird-Devel mailing list, web interface at 
 https://lists.sourceforge.net/lists/listinfo/firebird-devel







Re: [Firebird-devel] The Power of C++11 in CUDA 7

2015-03-23 Thread Slavomir Skopalik
Hi,
Windows file compression uses LZNT1
(https://msdn.microsoft.com/en-us/library/jj711990.aspx),
which is a dictionary-based compression like LZ4.
It works in 64kB blocks that are compressed into a smaller area, with some
free space left for updates.
But any update is problematic, and for MSSQL, Hyper-V, and others it is
prohibited.

Firebird compression (old and new) works on record fragments.
If the packed size of a fragment doesn't fit into the page, Firebird puts
the record into a different area of the file
(internally it is more complex, including page compacting).

With the new RLE you will probably get a lower compression ratio from
Windows compression, because many [-128,0] sequences are already
eliminated, but I don't recommend Windows compression for live servers
(it is only a good idea for backups).
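For context, the signed-count RLE that the [-128,0] remark refers to works roughly like this: a positive control byte n announces n literal bytes, a negative one announces -n repeats of the next byte. A simplified sketch (my model, not Firebird's actual SQZ code):

```python
# Signed-count RLE: control byte n > 0 means "n literal bytes follow";
# n < 0 means "repeat the next byte -n times". Simplified illustration.
def rle_compress(data):
    out = bytearray()
    i = 0
    while i < len(data):
        j = i                               # measure run of identical bytes
        while j < len(data) and data[j] == data[i] and j - i < 128:
            j += 1
        if j - i >= 3:                      # worth a repeat run
            out += bytes([(-(j - i)) & 0xFF, data[i]])
            i = j
        else:                               # gather a literal run
            k = i
            while k < len(data) and k - i < 127:
                # stop the literal run where a 3+ byte repeat begins
                if k + 2 < len(data) and data[k] == data[k+1] == data[k+2]:
                    break
                k += 1
            out += bytes([k - i]) + data[i:k]
            i = k
    return bytes(out)

def rle_decompress(data):
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i] - 256 if data[i] > 127 else data[i]
        if n >= 0:                          # literal run
            out += data[i+1:i+1+n]
            i += 1 + n
        else:                               # repeat run
            out += bytes([data[i+1]]) * (-n)
            i += 2
    return bytes(out)

row = b"ABC" + b"\x00" * 128 + b"XY"
packed = rle_compress(row)
assert rle_decompress(packed) == row
print(len(row), "->", len(packed))          # 133 -> 9
```

A 128-byte run of zeros packs into the two-byte pair [-128, 0], which is exactly the sequence discussed above.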

Slavek

 Hi,

 you misunderstood me.
 I said that I saw benefits when I applied compression to my own DB file
 at the Windows system level.
 I do not remember the numbers exactly, but the improvement was
 significant because that system had low memory compared to the DB size
 itself: memory around 4GB and DB size ~90GB.

 I am happy that the new algorithm is not only better but also faster and
 needs little memory.
 But I am interested whether it is also good for writing, not only reading.


 regards,
 Karol Bieniaszewski










Re: [Firebird-devel] Record level compression improvement

2015-03-17 Thread Slavomir Skopalik
Hi Alex,
I did a full restore of tpcc.fbk with logging of each compression request.

Result is here:

http://www.elektlabs.cz/tpcc.7z

Record size is the original length of the compression request.

New RLE is the RLE that I developed for FB3, but it was rejected.

LZ4 is the fast version, not HC: just the original data packed by lz4 alone.

RLE+LZ4 is the new RLE with LZ4 run over its output.

Some notes:
1. The tpcc database differs a lot from a live DB:
   the biggest record has Average unpacked length: 689.00, compression
   ratio: 1.22.
   In the UTF8 era, one VARCHAR(100) occupies 400 bytes.

2. I didn't test the speed impact; full integration of lz4 will require
   changes in vio.cpp and dmp.epp.
   But I still believe that we have to put in some threshold; by the
   current test it can be around 256 bytes of RLE output.

3. Because encoding and compression are based on statistical
   probabilities, we need some real data for research.

If you are interested in record encoding/compression, I'm ready to help.

Slavek


On 17.3.2015 15:24, Alex Peshkoff wrote:
 On 03/17/15 16:50, Slavomir Skopalik wrote:
  Hi Alex,
  please can you take your tpcc.fdb, back it up, 7zip it, and send it to me?
  I will use it as a reference database.

 Done.
 Confirm that you've received it pls.








Re: [Firebird-devel] Record level compression improvement

2015-03-17 Thread Slavomir Skopalik
Hi Alex,
please can you take your tpcc.fdb, back it up, 7zip it, and send it to me?
I will use it as a reference database.

Thank you.

Slavek


On 16.3.2015 18:33, Alex Peshkoff wrote:
 Test with tpcc.fdb using the lz4 command line (not the most realistic
 sample in the world, but hopefully more or less OK).
 DB file is 211864K, after compression - 112547K (53%, almost twice).
 0.975s, i.e. about 0.2 Gb/sec.
 Decompression takes 0.275s, i.e. 0.75 Gb/sec.
 In both cases User time is taken, avoiding most of losses for system calls.
 CPU is AMD FX-8120 3.1GHz







Re: [Firebird-devel] Record level compression improvement

2015-03-16 Thread Slavomir Skopalik
Hi Jim,
I did some research on storage compression and found this project:

https://code.google.com/p/lz4/

My idea is to use it only if the encoded size of a record is more than
approx. 4KB.

Do you have any notes on why it could be a bad idea?

Thanks, Slavek

PS: I made some changes in Firebird to rip the compressor out of the
storage engine (and put the new RLE in a second step, encoding in a
third step), but it was rejected by the community :)


On 1.3.2015 18:55, Slavomir Skopalik wrote:
 Hi Jim,
 my proposal was not as abstract as yours.

 I just want to put all parts of encoding/decoding into one class with
 a clear interface that will make it possible to plug in a different
 encoder at development time (FB3+).

 I will contact a Firebird developer to reach agreement about changes in
 this class:

 class Compressor : public Firebird::AutoStorage

 If it is possible to have access to the record format, it will be easy
 to create a self-described encoding.
 I have in mind an idea for such a schema that I would like to test.

 Slavek


 On 28.2.2015 22:43, Jim Starkey wrote:
 OK, I think I understand what you are trying to do -- and please correct me 
 if I'm wrong.  You want to standardize an interface between an encoding and 
 DPM, separating the actual encoding/decoding from the fragmentation process. 
  In other words, you want to compress a record in toto and then let somebody 
 else chop the resulting byte stream to and from data pages.  In essence, this 
 makes the compression scheme plug-replaceable.

 If this is your intention, it isn't a bad idea, but it does have problems.  
 The first is how to map a given record to a particular decoding schema.  The 
 second, more difficult, is how to do this without bumping the ODS 
 (desirable, but not essential).  A third is how to handle encodings that are 
 not variations on run length encoding (such as value based encoding).

 If I'm on the right track, do note that the current decoding schema already 
 fits your bill.  Concatenate the fragments and decode.  The encoding 
 process, on the other hand, is more problematic.

 Encoding/decoding in place is more efficient than using a temp, but not so 
 much as to preclude it.  I might be wrong, but I doubt that the existing 
 schema shows up as a hot spot in a profile.  But that said, I'm far from 
 convinced that variations on a run length theme are going to have any 
 significant benefit for either density or performance.

 My post-Interbase database systems don't access records on pages (NuoDB 
 doesn't even have pages).  Records have one format in storage and other 
 formats in memory, within a record class that understands the transitions 
 between formats (essentially doing the various encodings and decodings).
 There are generally an encoded form (raw byte stream), a descriptor vector
 for building new records, and some sort of ancillary structure for field 
 references to either.

 In my mind, I think it would be wiser for Firebird to go with a flexible 
 record object than to simply abstract the encoding/decoding process.  More 
 code would need to be changed, but when you were done, there would be much 
 less code.

 Architecturally, abstracting encode/decode makes sense, but practically I 
 don't think it buys much.  A deep reorganization, I believe, would have a 
 much better long-term payoff.

 But then maybe I missed your point...

 Jim Starkey


 On Feb 28, 2015, at 10:30 AM, Slavomir Skopalik skopa...@elektlabs.cz 
 wrote:

Hi Jim,
I don't want to change the ODS just to save one byte per page.
I want to change the sources to be able to implement a different
encoder (put on it whatever name you want) - change the ODS.

For some encoders the fragmentation loss is 1-2 bytes; for others it
can be more.
For some encoders reverse parsing is easy; for others it is much more
complicated.

In some situations generating a control stream can be a benefit, but as
it is now in the sources (FB2.5, FB3) that I read, it is not.

 Current compressor interface:
 to create control stream:
 ULONG

Re: [Firebird-devel] Record level compression improvement

2015-03-16 Thread Slavomir Skopalik

Hi Jim,
I have only my own DBs, which are designed for short record length (on disk).
I am looking for some real examples, but it is not easy to get them.

Some data from my DB (new RLE):

  Primary pointer page: 384, Index root page: 385
Total formats: 1, used formats: 1
Average record length: 31.20, total records: 125819782
Average version length: 0.00, total versions: 0, max versions: 0
Average fragment length: 0.00, total fragments: 0, max fragments: 0
Average unpacked length: 8038.00, compression ratio: 257.65
Pointer pages: 119, data page slots: 385896

About the packed-size threshold:
dictionary-based compressions are inefficient on small data.

http://wiki.illumos.org/display/illumos/LZ4+Compression

there are some statistics:

https://www.illumos.org/attachments/822/lz4_compression_bench.ods

http://fastcompression.blogspot.cz/2013/08/inter-block-compression.html

and some frame information is needed (about 15 bytes per packed block).

For short records (as in my example) it will not help.
For long records (the typical situation, with a note field attached to
everything) it can help significantly.

It can also be very effective on text blobs, mainly those stored in HTML
format.

Finally, I'm not sure that 4KB is a good threshold, but I believe that
some threshold will be needed.
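The threshold argument can be illustrated with any dictionary-based codec; the sketch below uses zlib purely as a stand-in for LZ4 (an assumption for the demo, since Python's stdlib has no LZ4 binding), with made-up record contents:

```python
import zlib

# A short record: per-stream framing overhead outweighs any savings.
short = b"idProduct=123;idMeasurand=7;Value1=4.5"
# A long, repetitive value (e.g. an HTML note blob): compresses well.
long_ = b"<p>measurement note, same boilerplate every row</p>" * 100

for label, data in (("short", short), ("long", long_)):
    packed = zlib.compress(data, 6)
    print(label, len(data), "->", len(packed), "bytes")
```

On the short record the compressed output is about the same size as the input or larger, while the long repetitive one shrinks dramatically, which is the intuition behind compressing only above some size threshold.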


Slavek


On 16.3.2015 17:42, James Starkey wrote:

I'd like to see some numbers computed from an actual (real) Firebird
database before it is considered.

But why only records over 4k?  And what commonality do you expect to find
on large records?

On Monday, March 16, 2015, Slavomir Skopalik skopa...@elektlabs.cz wrote:


Hi Jim,
I did some research on storage compression and found this project:

https://code.google.com/p/lz4/

My idea is to use it only if the encoded size of a record is more than
approx. 4KB.

Do you have any notes on why it could be a bad idea?

Thanks, Slavek

PS: I made some changes in Firebird to rip the compressor out of the
storage engine (and put the new RLE in a second step, encoding in a
third step), but it was rejected by the community :)


On 1.3.2015 18:55, Slavomir Skopalik wrote:

Hi Jim,
my proposal was not as abstract as yours.

I just want to put all parts of encoding/decoding into one class with
a clear interface that will make it possible to plug in a different
encoder at development time (FB3+).

I will contact a Firebird developer to reach agreement about changes in
this class:

class Compressor : public Firebird::AutoStorage

If it is possible to have access to the record format, it will be easy
to create a self-described encoding.
I have in mind an idea for such a schema that I would like to test.

Slavek


On 28.2.2015 22:43, Jim Starkey wrote:

OK, I think I understand what you are trying to do -- and please

correct me if I'm wrong.  You want to standardize an interface between an
encoding and DPM, separating the actual encoding/decoding from the
fragmentation process.  In other words, you want to compress a record in
toto and then let somebody else chop the resulting byte stream to and from
data pages.  In essence, this makes the compression scheme plug replaceable.

If this is your intention, it isn't a bad idea, but it does have

problems.  The first is how to map a given record to a particular decoding
schema.  The second, more difficult, is how to do this without bumping the
ODS (desirable, but not essential).  A third is how to handle encodings
that are not variations on run length encoding (such as value based
encoding).

If I'm on the right track, do note

Re: [Firebird-devel] Record level compression improvement

2015-03-16 Thread Slavomir Skopalik
Hi Karol,
in the current simplified stack you have:

execution engine
record-level compression
page storage
cache
HDD

On small records you have a CPU problem, but on large records you can
easily saturate the HDD.

To saturate the HDD you have to reach at least ~200MB/s on a common CPU.
The target is: be able to saturate the HDD, but store/read more real data :).

Slavek

On 16.3.2015 19:57, liviusliv...@poczta.onet.pl wrote:

 There are many compressors.
 I use this one: http://www.7-zip.org/ (open source, GNU LGPL)
 for sending compressed packets between my apps.
 It is very fast, even with the highest compression level.
 But I do not know the technical details of FB's needs.

 regards,
 Karol Bieniaszewski









[Firebird-devel] Proposal for value encoding schema

2015-03-02 Thread Slavomir Skopalik
Hi all,
after a discussion with Jim, I created a first draft of a proposal for
value encoding.

Pros: efficient for numerical values, including date and time.
Cons: poor for strings (can be worse than the current scheme); this can be
solved by applying RLE (not the current one) over the final result.

It is designed to be simple (there is room for improvement, but the
general value distribution function should be known)
and easy to implement.
It uses data from the record format to skip length storage where possible.

Control byte = code:3 bits + value:5 bits

Code  Value  Meaning
====  =====  =======
0     0      NULL
0     1      Float NaN
0     2-31   unused
1     0      fixed binary (length comes from record format)
1     1-31   unused
2     Int5   (-16..15) taken from the value bits
3     Int13  + 1 byte
4     Int21  + 2 bytes
5     Int29  + 3 bytes
6     Int37  + 4 bytes
7     Int45  + 5 bytes

Data type in the format structure:

INT              - stored by absolute value; fixed binary for INT greater
                   than 2^45
VARCHAR          - int (length) + bytes[length]
CHAR             - VARCHAR + fill spaces
DATE             - (value - '1.1.2016') stored as int (number of days)
TIME             - (time - '12:00:00 PM' (noon)) stored as INT (what is
                   the requested precision?)
TIMESTAMP        - as DATE followed by TIME
FLOAT            - integer value stored as integer, else fixed binary (4)
DOUBLE PRECISION - integer value stored as integer, else fixed binary (8)
DECIMAL          - stored as INT
BLOBID           - stored as integer

Record format:
uint32  tr_id
uint8   format
data
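My reading of the integer part of the control-byte scheme above, as a round-trippable sketch (not an existing implementation; the `value = n + 16` mapping for Int5 and the big-endian two's-complement layout for Int13..Int45 are my own assumptions, since the draft speaks of storage by absolute value):

```python
# Control byte = code (3 high bits) + value (5 low bits); only the
# integer codes 2..7 are sketched here (codes 0/1 are NULL/NaN/fixed).
def encode_int(n):
    if -16 <= n <= 15:                                # code 2: Int5
        return bytes([(2 << 5) | (n + 16)])           # assume value = n + 16
    for extra in range(1, 6):                         # codes 3..7: Int13..Int45
        bits = 5 + 8 * extra
        if -(1 << (bits - 1)) <= n < (1 << (bits - 1)):
            u = n & ((1 << bits) - 1)                 # two's complement
            out = [((2 + extra) << 5) | (u >> (8 * extra))]
            out += [(u >> (8 * i)) & 0xFF for i in range(extra - 1, -1, -1)]
            return bytes(out)
    raise ValueError("out of Int45 range: store as fixed binary")

def decode_int(buf):
    """Return (value, number of bytes consumed)."""
    code, val = buf[0] >> 5, buf[0] & 0x1F
    if code == 2:                                     # Int5 in the control byte
        return val - 16, 1
    extra = code - 2
    bits = 5 + 8 * extra
    u = val
    for b in buf[1:1 + extra]:                        # big-endian extra bytes
        u = (u << 8) | b
    if u >= 1 << (bits - 1):                          # sign-extend
        u -= 1 << bits
    return u, 1 + extra

for n in (0, -16, 15, 100, -100, 2**20, -(2**44)):
    enc = encode_int(n)
    assert decode_int(enc) == (n, len(enc))
print("round-trip OK")
```

Small integers cost a single byte, which is where the density win over the current fixed-width storage would come from.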


Any comments?

Slavek






Re: [Firebird-devel] Record level compression improvement

2015-03-01 Thread Slavomir Skopalik
Hi Jim,
my proposal was not as abstract as yours.

I just want to put all parts of encoding/decoding into one class with
a clear interface that will make it possible to plug in a different
encoder at development time (FB3+).

I will contact a Firebird developer to reach agreement about changes in
this class:

class Compressor : public Firebird::AutoStorage

If it is possible to have access to the record format, it will be easy
to create a self-described encoding.
I have in mind an idea for such a schema that I would like to test.

Slavek


On 28.2.2015 22:43, Jim Starkey wrote:
 OK, I think I understand what you are trying to do -- and please correct me 
 if I'm wrong.  You want to standardize an interface between an encoding and 
 DPM, separating the actual encoding/decoding from the fragmentation process.  
 In other words, you want to compress a record in toto and then let somebody 
 else chop the resulting byte stream to and from data pages.  In essence, this 
 makes the compression scheme plug-replaceable.

 If this is your intention, it isn't a bad idea, but it does have problems.  
 The first is how to map a given record to a particular decoding schema.  The 
 second, more difficult, is how to do this without bumping the ODS (desirable, 
 but not essential).  A third is how to handle encodings that are not 
 variations on run length encoding (such as value based encoding).

 If I'm on the right track, do note that the current decoding schema already 
 fits your bill.  Concatenate the fragments and decode.  The encoding process, 
 on the other hand, is more problematic.

 Encoding/decoding in place is more efficient than using a temp, but not so 
 much as to preclude it.  I might be wrong, but I doubt that the existing 
 schema shows up as a hot spot in a profile.  But that said, I'm far from 
 convinced that variations on a run length theme are going to have any 
 significant benefit for either density or performance.

 My post-Interbase database systems don't access records on pages (NuoDB 
 doesn't even have pages).  Records have one format in storage and other 
 formats in memory, within a record class that understands the transitions 
 between formats (essentially doing the various encodings and decodings).
 There are generally an encoded form (raw byte stream), a descriptor vector
 for building new records, and some sort of ancillary structure for field 
 references to either.

 In my mind, I think it would be wiser for Firebird to go with a flexible 
 record object than to simply abstract the encoding/decoding process.  More 
 code would need to be changed, but when you were done, there would be much 
 less code.

 Architecturally, abstracting encoding/decoding makes sense, but practically, I 
 don't think it buys much.  A deep reorganization, I believe, would have a much 
 better long term payoff.

 But then maybe I missed your point...

 Jim Starkey


 On Feb 28, 2015, at 10:30 AM, Slavomir Skopalik skopa...@elektlabs.cz 
 wrote:

 Hi Jim,
 I don't want to change the ODS just to save one byte per page.
 I want to change the sources to be able to implement a different
 encoder (call it whatever you want) - which would change the ODS.
 
 For some encoders, fragmentation loses 1-2 bytes; for others
 it can be more.
 For some encoders reverse parsing is easy; for others
 it is much more complicated.
 
 In some situations generating a control stream can be a benefit,
 but as it stands in the sources I have read (FB2.5, FB3), it is not.

 Current compressor interface:
 to create control stream:
 ULONG SQZ_length(const SCHAR* data, ULONG length, DataComprControl* dcc)

 to create final stream from control stream:
 void SQZ_fast(const DataComprControl* dcc, const SCHAR* input, SCHAR*
 output)

 To calculate how many bytes can be compressed into a small area (from
 control stream):
 USHORT SQZ_compress_length(const DataComprControl* dcc, const SCHAR*
 input, int space)

 To compress into small area:
 USHORT SQZ_compress(const DataComprControl* dcc, const SCHAR* input,
 SCHAR* output, int space)

 and to decompress:
 UCHAR* SQZ_decompress(const UCHAR* input, USHORT length,
 UCHAR* output, const UCHAR* const output_end)

 And some routines live directly in the storage code.

 In FB3 it is very similar (names changed, organized into a class, and the same hack
 in store_big_record; the problem is not the code itself, but where the code is).
 
 The question is:
 why keep the control stream (worse CPU, slightly worse HDD, and - also
 important for me - less readable code)?
 It seems that it was implemented this way

Re: [Firebird-devel] Recore level compresion imroovement

2015-02-28 Thread Slavomir Skopalik
Hi Jim,
I don't want to change the ODS just to save one byte per page.
I want to change the sources to be able to implement a different
encoder (call it whatever you want) - which would change the ODS.

For some encoders, fragmentation loses 1-2 bytes; for others
it can be more.
For some encoders reverse parsing is easy; for others
it is much more complicated.

In some situations generating a control stream can be a benefit,
but as it stands in the sources I have read (FB2.5, FB3), it is not.

Current compressor interface:
to create control stream:
ULONG SQZ_length(const SCHAR* data, ULONG length, DataComprControl* dcc)

to create final stream from control stream:
void SQZ_fast(const DataComprControl* dcc, const SCHAR* input, SCHAR* 
output)

To calculate how many bytes can be compressed into a small area (from 
the control stream):
USHORT SQZ_compress_length(const DataComprControl* dcc, const SCHAR* 
input, int space)

To compress into small area:
USHORT SQZ_compress(const DataComprControl* dcc, const SCHAR* input, 
SCHAR* output, int space)

and to decompress:
UCHAR* SQZ_decompress(const UCHAR* input, USHORT length,
UCHAR* output, const UCHAR* const output_end)

And some routines live directly in the storage code.
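For orientation, the control stream these functions work with follows the classic run-length convention. A rough sketch of building one is shown below; this is a simplified illustration under my reading of the scheme, not the actual SQZ code, and `buildControlStream` is a made-up name:

```cpp
#include <cstddef>
#include <vector>

// Simplified sketch of a run-length control stream: a positive control
// byte n means "n literal bytes follow"; a negative control byte -n
// means "the next byte is repeated n times".
std::vector<signed char> buildControlStream(const unsigned char* data, std::size_t len)
{
    std::vector<signed char> control;
    std::size_t i = 0;
    while (i < len)
    {
        // measure the run of identical bytes starting at i (capped at 128)
        std::size_t run = 1;
        while (i + run < len && data[i + run] == data[i] && run < 128)
            ++run;
        if (run >= 3)   // long enough to pay for its control byte
        {
            control.push_back(static_cast<signed char>(-static_cast<int>(run)));
            i += run;
        }
        else            // gather literals (up to 127) until a worthwhile run appears
        {
            const std::size_t start = i;
            while (i < len && i - start < 127)
            {
                std::size_t r = 1;
                while (i + r < len && data[i + r] == data[i]) ++r;
                if (r >= 3) break;
                i += r;
            }
            control.push_back(static_cast<signed char>(i - start));
        }
    }
    return control;
}
```

The point of contention in this thread is that Firebird materializes this control stream separately and then re-walks it (sometimes in reverse) during fragmentation, rather than emitting one packed stream.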

In FB3 it is very similar (names changed, organized into a class, and the same hack 
in store_big_record; the problem is not the code itself, but where the code is).

The question is:
why keep the control stream (worse CPU, slightly worse HDD, and - also 
important for me - less readable code)?
It seems that it was implemented this way because of RAM limitations.

And another question:
what functions and parameters should the new interface have?

If you have an idea how to use the control stream beneficially, please share it.

Slavek

BTW: if we drop the control stream, the posted code reduces to a single memcpy, 
which is implemented with SSE+ instructions.
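To illustrate the idea (a hypothetical sketch, not Firebird code): once the record has been compressed into one contiguous buffer, chopping it into page-sized fragments needs no knowledge of the encoding at all - each fragment is one plain copy, which compilers and libc implement with SSE/AVX moves. `pageSpace` stands in for the free space available on one data page:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: split an already-compressed record into
// page-sized fragments.  No reverse parsing of an RLE control stream
// is needed; each fragment is one contiguous copy.
std::vector<std::vector<unsigned char>>
splitIntoFragments(const unsigned char* packed, std::size_t packedLen,
                   std::size_t pageSpace)
{
    std::vector<std::vector<unsigned char>> fragments;
    std::size_t offset = 0;
    while (offset < packedLen)
    {
        const std::size_t n = std::min(pageSpace, packedLen - offset);
        fragments.emplace_back(packed + offset, packed + offset + n);
        offset += n;
    }
    return fragments;
}
```

Reassembly for decoding is the mirror image: concatenate the fragments and hand the whole buffer to the decoder once.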


On 28.2.2015 14:16, James Starkey wrote:
 I regret that I have neither a copy of the Firebird source on the boat nor
 access to adequate bandwidth to get it, so I'm not in a position to comment
 on the existing code one way or another.  But as I understand your
 proposal, you are suggesting that the ODS be changed to save (at most) one
 byte per 4,050 bytes (approximately) of a very large fragmented record.  That
 isn't much of a payback.

 But looking at your code below, it would be much faster if you just
 declared your variables as int and got rid of the casts.  All the casts are
 doing for you is forcing the compiler to explicitly truncate the results to
 16 bits, which is not necessary.

 I am aware that it is stylish to throw in as many casts and consts as
 possible, but simple type safety is both faster and more readable.

 I don't mean to dump on your proposal, but if you're going to make a
 change, make a change worth doing.  I'm not insisting that Firebird adopt
 value based encoding as that is a choice for the guys doing the
 implementing.  I did make the change from run length encoding to value
 based encoding in Netrastructure and found it reduced on-disk record sizes
 by 2/3.

 And, incidentally, the existing code that you deride as a hack is probably
 also my code, though probably reworked by half dozen folks over the years.
 Still, I would prefer the term "archaic historical artifact" to "hack", as
 it was written on a 1 MB Apollo DN 330 running a 68010, approximately the
 norm for workstations circa 1984.  Machines have changed since then, and
 with them, the tradeoffs.

 On Friday, February 27, 2015, Slavomir Skopalik skopa...@elektlabs.cz
 wrote:

   Hi Jim,
 I didn't call your scheme a hack; this is a misunderstanding.
 I said that the current implementation of RLE in Firebird is a hack
 (parsing the RLE control stream outside the compressor/decompressor, in
 reverse order).

 If I replace the current RLE with anything else, I have to do the same or worse hack(s).
 And I don't want to go this way (wasting time on a bad implementation).
 Please look at the code first.


 http://sourceforge.net/p/firebird/code/HEAD/tree/firebird/branches/B2_5_Release/src/jrd/dpm.epp

 // Move compressed data onto page

 while (length > 1)
 {
     // Handle residual count, if any
     if (count > 0)
     {
         const USHORT l = MIN((USHORT) count, length - 1);
         USHORT n = l;
         do {
             *--out = *--in;
         } while (--n);
         *--out = l;
         length -= (SSHORT) (l + 1);  // bytes remaining on page
         count -= (SSHORT) l;         // bytes remaining in run
         continue;
     }

     if ((count = *--control) < 0)
     {
         *--out = in[-1];
         *--out = count;
         in += count;
         length -= 2

Re: [Firebird-devel] Recore level compresion imroovement

2015-02-27 Thread Slavomir Skopalik

Hi Jim,
I didn't call your scheme a hack; this is a misunderstanding.
I said that the current implementation of RLE in Firebird is a hack
(parsing the RLE control stream outside the compressor/decompressor, in reverse order).

If I replace the current RLE with anything else, I have to do the same or worse hack(s).
And I don't want to go this way (wasting time on a bad implementation).
Please look at the code first.

http://sourceforge.net/p/firebird/code/HEAD/tree/firebird/branches/B2_5_Release/src/jrd/dpm.epp

// Move compressed data onto page

while (length > 1)
{
    // Handle residual count, if any
    if (count > 0)
    {
        const USHORT l = MIN((USHORT) count, length - 1);
        USHORT n = l;
        do {
            *--out = *--in;
        } while (--n);
        *--out = l;
        length -= (SSHORT) (l + 1);  // bytes remaining on page
        count -= (SSHORT) l;         // bytes remaining in run
        continue;
    }

    if ((count = *--control) < 0)
    {
        *--out = in[-1];
        *--out = count;
        in += count;
        length -= 2;
    }
}


As I wrote, it is impossible to change the encoding without refactoring the current 
code base.


Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 28.2.2015 1:12, James Starkey wrote:

First, I take personal offense at your characterization of my encoding
scheme as a hack.  It is not.  It is a carefully thought out scheme with
multiple implementations in three database systems.  It has been
measured, compared, and extensively profiled.  I would be the last to cram
it down someone's throat, but it is not a hack and I resent it being
referred to as such.

Secondly, the tests of an encoding scheme are density, cost of encoding,
and cost of decoding.  Your personal estimate of implementation cost
doesn't enter the equation.

What you consider normal for a Z80 doesn't carry all that much weight, at
least to me.


On Friday, February 27, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


  Hi Jim,
I will try to explain.

First, for any encoding schema we need a good interface that will be
respected by all other parts of the program.
Now the core of the RLE is in one file, but some other parts of Firebird try
to parse the RLE directly.
In this situation I need to clean up the code to use the interface.
To see what is not really correct, look here:
dpm.cpp - static void store_big_record(thread_db* tdbb, record_param* rpb,
  PageStack& stack,
  DataComprControl* dcc, ULONG size)

Second is the encoding.
I agree that your schema is better.
But currently it is impossible to integrate into Firebird, because of the lack of
an interface.
I will not replace one hack with another hack.
Also, the same encoding schema could be used for backup or the wire protocol.

Third:
why I disagree with the current control-char stream generation.
The current code (FB2.5.3) allocates one half of the record length for the control stream.
My RLE needs 66% of the record length for the control stream.
That means a buffer of similar size to the record length is already
allocated.
But instead of just copying, you rescan and reallocate to get data that you
could already have.
CPU and HDD are worse; RAM is a little better (at most 32 KB saved during writing).
I don't see any real benefit.

Conclusion:
Is it possible to change the mechanism from a control-char stream to a packed
stream (and create a new interface for the encoder/decoder)?
If yes, how can I help?
If no, can a hack like the one in store_big_record be moved into SQZ?

Historical info: I designed my RLE for the Zilog Z80 CPU on the ZX Spectrum in
the 80s.
It normally operates during compression/decompression in the same buffer.

Is that clear?

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333skype:skopalikse

Re: [Firebird-devel] Recore level compresion imroovement

2015-02-27 Thread Slavomir Skopalik

Hi Jim,
this is what happens in current Firebird if a record does not fit in the buffer:

1. Scan and calculate the compressed length.
2. If it does not fit, scan the control buffer and calculate how many bytes 
will fit, plus padding.

3. Compress into the small area (scanning again).
4. Find more free space on a data page and go to 1 with the unprocessed part 
of the record.


I'm not sure that this is faster than compressing into a buffer on the stack and
doing a few moves.

Why RLE now? Because I have it, and I started with the FB sources two 
weeks ago.

It was easy to adopt RLE, but it was hard to understand the padding.

Now I would like to look into record encoding like you describe, but to 
be able to do that,

I have to understand why it is designed the way it is.

And from another point of view:
the cost of the changes was small and the impact on size and speed high - that's 
why I did it.


Your proposal will need much more work.
From my point of view, it isn't realistic to do it in FB2.5x or FB3.
When the encoding is implemented, it will be nice to use it also for 
backup and the wire protocol.


Thank you.

Slavek
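The four-step flow described at the top of this message can be modeled as a toy loop. All names here are hypothetical and the "compression" is the identity function; only the control flow and the repeated scans of the record tail are illustrated:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

namespace toy {

// step 1: scan the remaining record and compute its compressed length
inline std::size_t scanCompressedLength(const unsigned char* /*data*/, std::size_t len)
{
    return len;  // identity "compression" for illustration
}

// step 2: from that scan, decide how many source bytes fit into `space`
inline std::size_t bytesThatFit(std::size_t len, std::size_t space)
{
    return std::min(len, space);
}

// step 3: compress that prefix into the page area
inline void packIntoArea(const unsigned char* data, std::size_t n, unsigned char* out)
{
    std::copy(data, data + n, out);
}

} // namespace toy

// Returns the number of page fragments written; note the full rescan of
// the remaining tail on every iteration (steps 1-3 repeated per fragment).
std::size_t storeFragmented(const unsigned char* rec, std::size_t len,
                            std::size_t pageSpace)
{
    std::vector<unsigned char> page(pageSpace);
    std::size_t fragments = 0;
    while (len > 0)
    {
        (void) toy::scanCompressedLength(rec, len);               // step 1
        const std::size_t n = toy::bytesThatFit(len, pageSpace);  // step 2
        toy::packIntoArea(rec, n, page.data());                   // step 3
        rec += n;                                                 // step 4: repeat
        len -= n;
        ++fragments;
    }
    return fragments;
}
```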


On 27.2.2015 16:40, James Starkey wrote:

The answer to your questions is simple: it is much faster to encode from
the original record onto the data page(s), eliminating the need to
allocate, populate, copy, and release a temporary buffer.

And, frankly, the cost of a byte per full database page is not something to
lose sleep over.

The competition for a different compression scheme isn't the 30-year-old
run length encoding but the self-describing, value-driven encoding I
described earlier.

Another area where there is much room for improvement is the encoding of
multi-column indexes.  There is a much more clever scheme that doesn't
waste every fifth byte.

On Friday, February 27, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


Hi Vlad,
as I see, in some situations (that really happen), packing into the small
area is padded with zeroes
(an uncompressed prefix with zero length).
And a new control char is added at the beginning of the next fragment (you lose 2
bytes).
With the current compression the difference is not that big, but with a better
one it becomes more significant.

Finally, I still don't understand why it is better to compress each fragment
separately, instead of
making one compressed block that is then split into fragments.

If we have a routine to compress/encode the full record, we can easily replace
the current RLE
with any other encoding schema.

In the current situation it is not easy to replace the current RLE with a value
encoding schema.

I finished a new RLE that is about 25% more effective than my previous post,
but I lose a lot of bytes on padding and new headers (and also 1 byte
per row to keep compatibility with the previous DB).

I will clean up the code and post it here during the next few days.

The record-differences encoding can also be improved; I will do it if
somebody needs it.

About update: I worry that a fragmented record will not bring a performance
gain during update.

Slavek


 Not exactly so. The big record is prepared for compression as a whole,
then the tail of the record is packed and put on separate page(s), and finally
what is left (and can be put on a single page) is really re-compressed separately.

 And when the record is materialized in RAM, all parts are read and
decompressed separately.

 What problem do you see here? How else do you propose to decompress a
fragmented record?


 If the compressor cannot fit into the small space, the rest of the space is padded
(char 0x0 is used).

 Record image in memory always has a fixed length, according to the record
format.

 This wastes CPU and disk space.

 CPU - yes, Memory - yes, Disk - no.

 Also, note, it allows later to not waste CPU when fields are
accessed and the record is updated, AFAIU.

Regards,
Vlad



--
Dive into the World of Parallel Programming! The Go Parallel Website, sponsored
by Intel and developed in partnership with Slashdot Media, is your hub for all
things parallel software development, from weekly thought leadership blogs to
news, videos, case studies, tutorials and more. Take a look and join the
conversation now. http://goparallel.sourceforge.net/
Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel









Re: [Firebird-devel] Recore level compresion imroovement

2015-02-27 Thread Slavomir Skopalik
Hi,
I investigated record storage further and found this:
if a record is going to be fragmented, each part is compressed
separately.
And when the record is materialized in RAM, all parts are read and decompressed
separately.
If the compressor cannot fit into the small space, the rest of the space is padded 
(char 0x0 is used).

This wastes CPU and disk space.

Does anybody have an idea why Firebird does it this way?

Thanks Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 23.2.2015 7:38, Dmitry Yemanov wrote:
 I didn't look at the code closely, but the idea is more or less the same
 as I was considering for CORE-4401. I just wanted to use the control
 char of zero for that purpose, as it's practically useless for either
 compressible or non-compressible runs.

 The new encoding affects the ODS, so it cannot be used in the v2.5
 series (it may be possible with ODS 11.3 but I don't think we need a
 minor ODS change in v2.5). But it surely could be applied to v3 after
 review and we don't have to worry about backward compatibility in ODS 12.


 Dmitry


 --
 Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server
 from Actuate! Instantly Supercharge Your Business Reports and Dashboards
 with Interactivity, Sharing, Native Excel Exports, App Integration & more
 Get technology previously reserved for billion-dollar corporations, FREE
 http://pubads.g.doubleclick.net/gampad/clk?id=190641631&iu=/4140/ostg.clktrk
 Firebird-Devel mailing list, web interface at 
 https://lists.sourceforge.net/lists/listinfo/firebird-devel






Re: [Firebird-devel] Recore level compresion imroovement

2015-02-27 Thread Slavomir Skopalik
Hi Vlad,
as I see, in some situations (that really happen), packing into the small 
area is padded with zeroes
(an uncompressed prefix with zero length).
And a new control char is added at the beginning of the next fragment (you lose 2 
bytes).
With the current compression the difference is not that big, but with a better 
one it becomes more significant.

Finally, I still don't understand why it is better to compress each fragment 
separately, instead of
making one compressed block that is then split into fragments.

If we have a routine to compress/encode the full record, we can easily replace 
the current RLE
with any other encoding schema.

In the current situation it is not easy to replace the current RLE with a value 
encoding schema.

I finished a new RLE that is about 25% more effective than my previous post,
but I lose a lot of bytes on padding and new headers (and also 1 byte 
per row to keep compatibility with the previous DB).

I will clean up the code and post it here during the next few days.

The record-differences encoding can also be improved; I will do it if 
somebody needs it.

About update: I worry that a fragmented record will not bring a performance 
gain during update.

Slavek

 Not exactly so. The big record is prepared for compression as a whole, 
 then the tail of the record is packed and put on separate page(s), and finally what 
 is left (and can be put on a single page) is really re-compressed separately.

 And when the record is materialized in RAM, all parts are read and decompressed
 separately.
 What problem do you see here? How else do you propose to decompress a 
 fragmented
 record?


 If the compressor cannot fit into the small space, the rest of the space is padded
 (char 0x0 is used).
 Record image in memory always has a fixed length, according to the record 
 format.

 This wastes CPU and disk space.
 CPU - yes, Memory - yes, Disk - no.

 Also, note, it allows later to not waste CPU when fields are accessed and
 the record is updated, AFAIU.

 Regards,
 Vlad







Re: [Firebird-devel] Recore level compresion imroovement

2015-02-27 Thread Slavomir Skopalik

Hi Jim,
I will try to explain.

First, for any encoding schema we need a good interface that will be 
respected by all other parts of the program.
Now the core of the RLE is in one file, but some other parts of Firebird 
try to parse the RLE directly.

In this situation I need to clean up the code to use the interface.
To see what is not really correct, look here:
dpm.cpp - static void store_big_record(thread_db* tdbb, record_param* rpb,
 PageStack& stack,
 DataComprControl* dcc, ULONG size)

Second is the encoding.
I agree that your schema is better.
But currently it is impossible to integrate into Firebird, because of the lack 
of an interface.

I will not replace one hack with another hack.
Also, the same encoding schema could be used for backup or the wire protocol.

Third:
why I disagree with the current control-char stream generation.
The current code (FB2.5.3) allocates one half of the record length for the control stream.
My RLE needs 66% of the record length for the control stream.
That means a buffer of similar size to the record length is already allocated.
But instead of just copying, you rescan and reallocate to get data that you 
could already have.

CPU and HDD are worse; RAM is a little better (at most 32 KB saved during writing).
I don't see any real benefit.

Conclusion:
Is it possible to change the mechanism from a control-char stream to a packed 
stream (and create a new interface for the encoder/decoder)?

If yes, how can I help?
If no, can a hack like the one in store_big_record be moved into SQZ?

Historical info: I designed my RLE for the Zilog Z80 CPU on the ZX Spectrum in the 80s.
It normally operates during compression/decompression in the same buffer.

Is that clear?

Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 27.2.2015 19:14, James Starkey wrote:

Perhaps a smarter approach would be to capture the run lengths on the first
scan to drive the encoding.  I vaguely remember that the code once did
something like that.

Could you describe your scheme and explain why it's better?  Run length
encoding doesn't seem to lend itself to a lot of optimizations.  It's
actually a bad scheme that just happened to be better than the alternatives
(then available).

Historical note:  The DEC JRD was part of disk engineering's database
machine program.  The group manager was somewhat upset that we were doing
data compression at all -- DEC, after all, sold disk drives.  I explained
that it was really an important performance optimization to minimize disk
read and writes, which seemed to have mollified him.  Before that, it just
wasn't anything that database systems did.

On Friday, February 27, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


  Hi Jim,
this is what happens in current Firebird if a record does not fit in the buffer:

1. Scan and calculate the compressed length.
2. If it does not fit, scan the control buffer and calculate how many bytes will
fit, plus padding.
3. Compress into the small area (scanning again).
4. Find more free space on a data page and go to 1 with the unprocessed part
of the record.

I'm not sure that this is faster than compressing into a buffer on the stack and
doing a few moves.

Why RLE now? Because I have it, and I started with the FB sources two weeks
ago.
It was easy to adopt RLE, but it was hard to understand the padding.

Now I would like to look into record encoding like you describe, but to be
able to do that,
I have to understand why it is designed the way it is.

And from another point of view:
the cost of the changes was small and the impact on size and speed high - that's why
I did it.

Your proposal will need much more work.
 From my point of view, it isn't realistic to do it in FB2.5x or FB3.
When the encoding is implemented, it will be nice to use it also for backup
and the wire protocol.

Thank you.

Slavek


On 27.2.2015 16:40, James Starkey wrote:

The answer to your questions is simple: it is much faster to encode from
the original record onto the data page(s), eliminating the need to
allocate, populate, copy, and release a temporary buffer.

And, frankly, the cost of a byte per full database page is not something to
lose sleep over.

The competition for a different compression scheme isn't the 30-year-old
run length encoding but the self-describing, value-driven encoding I
described earlier.

Another area where there is much room for improvement is the encoding of
multi-column indexes.  There is a much more clever scheme that doesn't
waste every fifth byte.

On Friday, February 27, 2015, Slavomir Skopalik skopa...@elektlabs.cz 
javascript:_e(%7B%7D,'cvml','skopa...@elektlabs.cz');
wrote:


  Hi Vlad,
as I see, in some situation

Re: [Firebird-devel] Record level compresion imroovement

2015-02-23 Thread Slavomir Skopalik

Hi,
for FB3 I would recommend a more effective algorithm rather than hacking the 
current one.

If you are interested, I can specify.

I made another test with a release build on Windows 64-bit; the results:

DB size decreased from 90 GB to 60 GB.
A select count(*) on a table like this one:

Create Table ProductDataEx  (
idProduct TLongInt NOT NULL,
idMeasurand Smallint NOT NULL,
idMeasurementMode TSmallInt NOT NULL,
ValIndex Smallint Default 0 NOT NULL,
idPeople TSmallInt NOT NULL,
tDate TimeDateFutureCheck NOT NULL,
Value1 Double precision NOT NULL,
Description TMemo,
Constraint pk_ProductDataEx Primary Key 
(idProduct,idMeasurand,idMeasurementMode,ValIndex)

);

Time decreased from ~150 s (any run) to 52 s for the first run and 36 s for later runs.

This modification can read an old DB, but after a write, a previous server will 
fail.

So, if I can, I will vote for 2.5.4 (FB3 is still far away).

Also, I made some speed optimizations; this version is faster than the 
previous one.


If somebody else is interested in this, I can put my private build for 
Win64 on my website.


Best regards Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 23.2.2015 7:38, Dmitry Yemanov wrote:

I didn't look at the code closely, but the idea is more or less the same
as I was considering for CORE-4401. I just wanted to use the control
char of zero for that purpose, as it's practically useless for either
compressible or non-compressible runs.

The new encoding affects the ODS, so it cannot be used in the v2.5
series (it may be possible with ODS 11.3 but I don't think we need a
minor ODS change in v2.5). But it surely could be applied to v3 after
review and we don't have to worry about backward compatibility in ODS 12.


Dmitry





/*
 *  PROGRAM:JRD Access Method
 *  MODULE: sqz.cpp
 *  DESCRIPTION:Record compression/decompression
 *
 * The contents of this file are subject to the Interbase Public
 * License Version 1.0 (the License); you may not use this file
 * except in compliance with the License. You may obtain a copy
 * of the License at http://www.Inprise.com/IPL.html
 *
 * Software distributed under the License is distributed on an
 * AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express
 * or implied. See the License for the specific language governing
 * rights and limitations under the License.
 *
 * The Original Code was created by Inprise Corporation
 * and its predecessors. Portions created by Inprise Corporation are
 * Copyright (C) Inprise Corporation.
 *
 * All Rights Reserved.
 * Contributor(s): __.
 *
 * Compressed format:
 *   signed char Control_Char, then payload
 *
 *   Control_Char:
 *     0..127   : uncompressed data in the payload, of length Control_Char
 *     -1       : 128-byte blocks of zeroes; the payload contains one unsigned
 *                char that represents the count of these blocks
 *     -2       : reserved for future use
 *     -128..-3 : compressed data; the absolute value of Control_Char is how
 *                many bytes are repeated; the payload is the pattern (one byte)
 *
 */

#include "firebird.h"
#include <string.h>
#include "../jrd/common.h"
#include "../jrd/sqz.h"
#include "../jrd/req.h"
#include "../jrd/err_proto.h"
#include "../jrd/gds_proto.h"
#include "../jrd/sqz_proto.h"


using namespace Jrd;

USHORT SQZ_apply_differences(Record* record, const SCHAR* differences, const 
SCHAR* const end)
{
/**
 *
 *  S Q Z _ a p p l y _ d i f f e r e n c e s
 *
 **
 *
 * Functional description
 *  Apply a differences (delta) to a record.  Return the length.
 *
 **/

if (end - differences > MAX_DIFFERENCES)
{
	BUGCHECK(176);	/* msg 176 bad difference record */
}

SCHAR* p = (SCHAR*) record-rec_data;
const SCHAR* const p_end = (SCHAR*) p
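The control-char format documented in the header comment above could be decoded along these lines. This is a sketch from my reading of that comment, not the actual Firebird implementation:

```cpp
#include <cstddef>
#include <vector>

// Sketch decoder for the control-char format in the sqz.cpp header:
//   0..127   : control char is a count of literal payload bytes
//   -1       : payload is one unsigned byte = count of 128-byte zero blocks
//   -2       : reserved
//   -128..-3 : |control| copies of the single payload (pattern) byte
std::vector<unsigned char> decode(const signed char* in, std::size_t inLen)
{
    std::vector<unsigned char> out;
    std::size_t i = 0;
    while (i < inLen)
    {
        const signed char c = in[i++];
        if (c >= 0)                       // literal run
        {
            out.insert(out.end(), in + i, in + i + c);
            i += c;
        }
        else if (c == -1)                 // run of 128-byte zero blocks
        {
            const unsigned char blocks = static_cast<unsigned char>(in[i++]);
            out.insert(out.end(), std::size_t(blocks) * 128, 0);
        }
        else if (c <= -3)                 // repeated pattern byte
        {
            out.insert(out.end(), static_cast<std::size_t>(-c),
                       static_cast<unsigned char>(in[i++]));
        }
        // c == -2 is reserved; a real decoder would reject it
    }
    return out;
}
```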

Re: [Firebird-devel] Record level compresion imroovement

2015-02-23 Thread Slavomir Skopalik

Hi Jim,
can you explain more about your algorithm for "self-describing value 
encoding"?

I'm interested in this.

Thank you Slavek

Ing. Slavomir Skopalik
Executive Head
Elekt Labs s.r.o.
Collection and evaluation of data from machines and laboratories
by means of system MASA (http://www.elektlabs.cz/m2demo)
-
Address:
Elekt Labs s.r.o.
Chaloupky 158
783 72 Velky Tynec
Czech Republic
---
Mobile: +420 724 207 851
icq:199 118 333
skype:skopaliks
e-mail:skopa...@elektlabs.cz
http://www.elektlabs.cz

On 23.2.2015 14:13, James Starkey wrote:

I've been using a self-describing value encoding for a decade and a half.
It's denser and cheaper to compress and decompress than the existing run
length encoding.  I'm not sure that compressing version deltas would
be a lot of fun, but probably some clever fellow can think of a good
algorithm.
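For readers curious what such an encoding can look like, here is one possible flavor - purely illustrative, and not necessarily the scheme used in Netfrastructure or NuoDB: small integers fold into a single self-describing code byte, and larger ones store only their significant bytes behind a length code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative self-describing value encoding (hypothetical layout):
//   0..100          : the value itself, in one code byte
//   0xC0 + n (n>=1) : n big-endian payload bytes follow
void encodeInt(std::vector<uint8_t>& out, int64_t v)
{
    if (v >= 0 && v <= 100)              // tiny ints fold into the code byte
    {
        out.push_back(static_cast<uint8_t>(v));
        return;
    }
    // otherwise: code byte 0xC0 + byte count, then minimal big-endian bytes
    uint8_t buf[8];
    int n = 0;
    uint64_t u = static_cast<uint64_t>(v);
    do { buf[n++] = static_cast<uint8_t>(u & 0xFF); u >>= 8; } while (u);
    out.push_back(static_cast<uint8_t>(0xC0 + n));
    while (n) out.push_back(buf[--n]);
}
```

A decoder reads the code byte first and knows exactly how many bytes follow, so values need no external descriptor - hence "self-describing". (Negative values occupy all eight bytes in this sketch; a real scheme would handle them more compactly.)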

On Monday, February 23, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


Hi,
for FB3 I would recommend a more effective algorithm rather than hacking the current
one.
If you are interested, I can specify.

I made another test with a release build on Windows 64-bit; the results:

DB size decreased from 90 GB to 60 GB.
A select count(*) on a table like this one:

Create Table ProductDataEx  (
 idProduct TLongInt NOT NULL,
 idMeasurand Smallint NOT NULL,
 idMeasurementMode TSmallInt NOT NULL,
 ValIndex Smallint Default 0 NOT NULL,
 idPeople TSmallInt NOT NULL,
 tDate TimeDateFutureCheck NOT NULL,
 Value1 Double precision NOT NULL,
 Description TMemo,
Constraint pk_ProductDataEx Primary Key (idProduct,idMeasurand,
idMeasurementMode,ValIndex)
);

Query time decreased from ~150 s (any run) to 52 s for the first run and
36 s for subsequent runs.

This modification can read an old DB, but once it has written, the previous
server version will fail. So if I can, I will vote for 2.5.4 (FB3 is too
far away).

Also, I made some speed optimizations; this version is faster than the
previous one.

If anybody else is interested in this, I can put my private build for
Win64 on my web site.

Best regards Slavek


On 23.2.2015 7:38, Dmitry Yemanov wrote:


I didn't look at the code closely, but the idea is more or less the same
as I was considering for CORE-4401. I just wanted to use the control
char of zero for that purpose, as it's practically useless for either
compressible or non-compressible runs.

The new encoding affects the ODS, so it cannot be used in the v2.5
series (it may be possible with ODS 11.3 but I don't think we need a
minor ODS change in v2.5). But it surely could be applied to v3 after
review and we don't have to worry about backward compatibility in ODS 12.


Dmitry



Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel











Re: [Firebird-devel] Record level compression improvement

2015-02-23 Thread Slavomir Skopalik

Hi Jim,
thank you very much.

If I understand correctly,
you recommend a record structure like this:
1. format version
2. null flags (each byte encodes 8 fields; only nullable fields have a flag)

3. tr_id (encoded the same way as other integers)
4. Only non-NULL fields encoded.
   For example, it doesn't matter whether an integer is declared as 16- or
64-bit; it is encoded by its real value.
   Also, if a floating-point number contains an integer (which is often the
case), it can be stored as an integer.

   Other data types are handled similarly.

For the ranges and offsets, you presumably performed some statistical
analysis, as described in Shannon's information theory.

Very good idea, thanks for this!

In Firebird, I do not worry about record differences, because they are
stored really ineffectively

(a no-change diff for a 32000-byte record takes 250 bytes).

About my RLE: I had prepared an algorithm that encodes up to 66 zeroes
into one byte and, in the worst case, adds at most 3 bytes to a 64 KB record.

Slavek


On 23.2.2015 16:31, James Starkey wrote:

The encoding works like this:  Each value consists of a type code followed
by zero or more bytes.  For integers, there are type codes for a range of
values, say -10 to 40, and codes for integers of length 1 to 8.  For
strings, there are type codes for strings from, say, 0 to 40 bytes,
followed immediately by the respective strings, and for strings with binary
counts from 1 to 4 bytes that are first followed by the count and the
respective string.  There are similar sets of codes for decimal scaled
integers, doubles, dates, etc.

So small integers are represented by a single byte.  Short strings are
presented by a byte plus the string.  The exact ranges for small integers
and lengths of small strings are more or less arbitrary.

I have also restricted strings to UTF-8 (which is a different argument),
but the encoding doesn't attach semantics to strings, so this isn't
strictly necessary.

No bytes are wasted in padding, high order binary zeros, or run lengths.

In memory, I have generally used a vector of 16-bit offsets to hold the
offsets of known fields plus a high-water mark, which minimizes parsing to
near zero.  Note that a simple static lookup table will give the lengths of
most types.  Counted values are represented in the table by negative
count lengths.

I used the scheme in Netfrastructure/Falcon and again in NuoDB.  For
AmorphousDB I reimplemented a similar scheme with slightly different tuning.

Note that given the address and length of an encoded record, it is trivial
to validate a record and to print out formatted values for debugging.
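A minimal sketch of the kind of type-code encoding described above. The
concrete code assignments (codes 0..50 for the literal values -10..40,
codes 51..58 for 1..8 payload bytes) are invented for illustration; the
actual Netfrastructure/Falcon/NuoDB codes differ:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical code assignments (not the real ones):
// codes 0..50 encode the literal values -10..40 in the type byte itself;
// codes 51..58 mean "a big-endian two's-complement integer of 1..8 bytes
// follows".
void encodeInt(std::vector<uint8_t>& out, int64_t v)
{
    if (v >= -10 && v <= 40)
    {
        // Small integers cost exactly one byte: value folded into the code.
        out.push_back(static_cast<uint8_t>(v + 10));
        return;
    }
    // Find the minimal byte length that still round-trips the value.
    int len = 1;
    while (len < 8 && (v >> (len * 8 - 1)) != 0 && (v >> (len * 8 - 1)) != -1)
        ++len;
    out.push_back(static_cast<uint8_t>(50 + len));
    for (int i = len - 1; i >= 0; --i)      // big-endian payload
        out.push_back(static_cast<uint8_t>(v >> (i * 8)));
}
```

Under this scheme a value like 7 costs one byte, 1000 costs three (the code
plus two payload bytes), and only a full 8-byte integer pays the worst case
of nine - no padding or high-order zero bytes are stored.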

On Monday, February 23, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


  Hi Jim,
can you explain more about your algorithm for self-describing value
encoding?
I'm interested in this.

Thank you Slavek


On 23.2.2015 14:13, James Starkey wrote:

I've been using a self-describing value encoding for a decade and a half.
It's denser and cheaper to compress and decompress than the existing run
length encoding, though I'm not sure that compressing version deltas would
be a lot of fun; probably some clever fellow can think of a good
algorithm.

On Monday, February 23, 2015, Slavomir Skopalik skopa...@elektlabs.cz
wrote:


  Hi,
for FB3 I would recommend a more effective algorithm than hacking the
current one. If you are interested, I can specify.

I made another test with a release build on 64-bit Windows, with these results:

DB size decreased from 90 GB to 60 GB.
Some SELECT COUNT(*) queries from a table like this one:

Create Table ProductDataEx  (
 idProduct TLongInt NOT NULL,
 idMeasurand Smallint NOT NULL,
 idMeasurementMode TSmallInt NOT NULL,
 ValIndex Smallint Default 0 NOT NULL,
 idPeople TSmallInt NOT NULL,
 tDate TimeDateFutureCheck NOT NULL,
 Value1 Double precision NOT NULL,
 Description TMemo,
Constraint pk_ProductDataEx Primary Key (idProduct,idMeasurand

Re: [Firebird-devel] Record level compresion imroovement

2015-02-23 Thread Slavomir Skopalik
On 23.2.2015 20:36, James Starkey wrote:
> Encode null as a value type and skip the null flags altogether -- saves a
> couple of bytes for every record.

I am thinking of using flags only for nullable fields.
In that case you lose one byte for each NULL field, but only when it is
NULL; I would lose one byte per 8 nullable fields every time.
A special type value can be easier to parse.
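The byte costs of the two schemes can be sketched with a little arithmetic
(a sketch only; the figures are illustrative, not taken from either
implementation):

```cpp
#include <cassert>
#include <cstddef>

// Bytes spent on NULL bookkeeping under the two schemes being compared:
// a per-record bitmap with one bit per nullable field, versus a one-byte
// "null" type code emitted only for fields that are actually NULL.
std::size_t bitmapCost(std::size_t nullableFields)
{
    return (nullableFields + 7) / 8;   // paid on every record
}

std::size_t typeCodeCost(std::size_t actualNulls)
{
    return actualNulls;                // paid only when a field is NULL
}
```

For a record with 16 nullable fields the bitmap always costs 2 bytes, while
the type-code scheme costs nothing when no field is NULL and 16 bytes when
all are; which wins depends on how often fields are actually NULL.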


> I'd encode the format version as the first value. That will let you have
> 2^63 format versions, which should be enough.
>
> I'd get the transaction id in the record header rather than in the record
> itself so it can be compared (a high-frequency operation) without decoding.

It sounds like the groundwork for a flashback function:

http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_flashback.htm#ADFNS01001

Slavek




[Firebird-devel] Record level compression improvement

2015-02-22 Thread Slavomir Skopalik

Hi all,
I have made a little improvement in record-level compression.

Motivation:
 - significant database growth after switching to UTF8

Analysis:
 - I found an inefficient algorithm for compressing zeroes
 - I found that some control char values are not used

Solution:
 - I added to the code the ability to pack up to 32 KB of zeroes into
   2 bytes without losing current features.
 - The new version can read an old database, but once it has written,
   the old version cannot read it.
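The "32 KB of zeroes into 2 bytes" claim follows from the control codes
documented in the attached sqz.cpp: control byte -1 plus one unsigned count
byte describes runs of 128-byte zero blocks. A hedged sketch of the encoder
side (assuming the count byte is stored biased by one, which the post does
not state explicitly):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Encode a run of `zeroes` zero bytes using the proposed control codes:
//   -1, N  : (N + 1) blocks of 128 zero bytes  (up to 256 * 128 = 32768)
//   -k, b  : k repeats of pattern byte b, for k in 3..128
std::vector<int8_t> encodeZeroRun(std::size_t zeroes)
{
    std::vector<int8_t> out;
    while (zeroes >= 128)
    {
        const std::size_t blocks = std::min<std::size_t>(zeroes / 128, 256);
        out.push_back(-1);
        out.push_back(static_cast<int8_t>(blocks - 1));  // biased count
        zeroes -= blocks * 128;
    }
    if (zeroes >= 3)  // short tail uses the classic repeat code
    {
        out.push_back(static_cast<int8_t>(-static_cast<int>(zeroes)));
        out.push_back(0);  // the repeated pattern byte (zero)
    }
    // Runs of 1-2 zeroes would go into a literal run; omitted here.
    return out;
}
```

With this extension 32768 zeroes collapse to exactly two bytes, versus 256
control/pattern pairs (512 bytes) under the old 128-byte repeat limit.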


Result:
DB size: a DB with the WIN2050 character set was packed from 733 MB down
to 633 MB.
Restore time decreased from 3:06 to 2:55 minutes.
Select operations were a little faster (depending on the data field).

I made the changes in the current trunk for version 2.5.4.

Please review this code and let me know if it is interesting for the
Firebird community.

The code is not yet well optimized; a more effective and faster algorithm
could also be used in the future.


Best regards Slavek


/*
 *  PROGRAM:JRD Access Method
 *  MODULE: sqz.cpp
 *  DESCRIPTION:Record compression/decompression
 *
 * The contents of this file are subject to the Interbase Public
 * License Version 1.0 (the License); you may not use this file
 * except in compliance with the License. You may obtain a copy
 * of the License at http://www.Inprise.com/IPL.html
 *
 * Software distributed under the License is distributed on an
 * AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express
 * or implied. See the License for the specific language governing
 * rights and limitations under the License.
 *
 * The Original Code was created by Inprise Corporation
 * and its predecessors. Portions created by Inprise Corporation are
 * Copyright (C) Inprise Corporation.
 *
 * All Rights Reserved.
 * Contributor(s): __.
 *
 * Compressed format:
 *	signed char control_char, payload
 *
 *	control_char:
 *	  0..127    : uncompressed data in payload, of length control_char
 *	  -1        : 128-byte blocks of zeroes; payload contains one unsigned
 *	              char that represents the count of these blocks
 *	  -2        : reserved for future use
 *	  -128..-3  : compressed data; the absolute value of control_char
 *	              represents how many bytes have to be repeated; the
 *	              payload is the pattern (one byte)
 *
 */
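A standalone sketch of a decoder for the control-byte format described
above (illustration only, not part of sqz.cpp; it assumes the -1 count byte
is biased by one so that two bytes can reach 32 KB of zeroes):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Expand one compressed stream according to the format in the header comment.
std::vector<uint8_t> sqzExpand(const std::vector<int8_t>& in)
{
    std::vector<uint8_t> out;
    std::size_t i = 0;
    while (i < in.size())
    {
        const int8_t c = in[i++];
        if (c >= 0)                        // 0..127: literal payload bytes
        {
            for (int8_t k = 0; k < c; ++k)
                out.push_back(static_cast<uint8_t>(in[i++]));
        }
        else if (c == -1)                  // run of 128-byte zero blocks
        {
            const std::size_t blocks = static_cast<uint8_t>(in[i++]) + 1u;
            out.insert(out.end(), blocks * 128, 0);
        }
        else if (c != -2)                  // -128..-3: repeated pattern byte
        {
            const uint8_t pattern = static_cast<uint8_t>(in[i++]);
            out.insert(out.end(), static_cast<std::size_t>(-c), pattern);
        }
    }
    return out;
}
```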

#include "firebird.h"
#include <string.h>
#include "../jrd/common.h"
#include "../jrd/sqz.h"
#include "../jrd/req.h"
#include "../jrd/err_proto.h"
#include "../jrd/gds_proto.h"
#include "../jrd/sqz_proto.h"


using namespace Jrd;

USHORT SQZ_apply_differences(Record* record, const SCHAR* differences, const 
SCHAR* const end)
{
/**
 *
 *  S Q Z _ a p p l y _ d i f f e r e n c e s
 *
 **
 *
 * Functional description
 *  Apply a differences (delta) to a record.  Return the length.
 *
 **/

	if (end - differences > MAX_DIFFERENCES)
	{
		BUGCHECK(176);	/* msg 176 bad difference record */
	}

	SCHAR* p = (SCHAR*) record->rec_data;
	const SCHAR* const p_end = (SCHAR*) p + record->rec_length;

	while (differences < end && p < p_end)
	{
		const SSHORT l = *differences++;
		if (l > 0)
		{
			if (p + l > p_end)
			{
				BUGCHECK(177);	/* msg 177 applied differences will not fit in record */
			}
			if (differences + l > end)
			{
				BUGCHECK(176);	// msg 176 bad difference record
			}
			memcpy(p, differences, l);
			p += l;
			differences += l;
		}
		else
		{
			p += -l;
		}
	}

	const USHORT length = (p - (SCHAR*) record->rec_data);

	if (length > record->rec_length || differences < end)
	{
		BUGCHECK(177);	/* msg 177 applied differences will not fit in record */
	}

	return length;
}


USHORT SQZ_compress(const DataComprControl* dcc, const SCHAR* input, SCHAR* 
output, int space)
{
/**
 *
 *  S Q Z _ c o m p r e s s
 *
 **
 *
 * Functional