Re: [Firebird-devel] Commit (un)certainity

2022-01-09 Thread Ann Harrison
> It is documented that a successful return from the TCP send() function
> doesn't mean successful delivery of the data to the target host, merely
> that it was put into the socket buffer.
> If op_commit is sent but a network error appears while waiting for the
> response, there can be two cases:
> 1) the op_commit packet is lost on its way to the server;
> 2) the op_response is lost on its way to the client.
>
> In the first case the transaction on the server is rolled back; in the
> latter it is committed successfully.
>
> Is there a way to handle such a situation? Using two round-trips cannot
> solve the problem, it only shifts the point of uncertainty.
>

Two phase commit does solve the problem, at considerable expense.
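
As a rough illustration of why the extra round trip helps (a minimal sketch in
C++ with made-up names - this is not the Firebird wire protocol or API): once a
prepare has been acknowledged, a lost commit response leaves the transaction in
limbo rather than in doubt, and the client can reconnect and resolve it
deterministically.

    #include <iostream>

    // Illustrative states and calls only - not the actual Firebird API.
    enum class TxState { Active, Prepared, Committed, RolledBack };

    struct ServerStub {
        TxState state = TxState::Active;
        bool prepare() { state = TxState::Prepared; return true; }   // op_prepare acknowledged
        bool commit()  { state = TxState::Committed; return true; }  // op_commit acknowledged
        TxState query() const { return state; }                      // reconnect and ask
    };

    // With a single op_commit, a lost response leaves the outcome unknown.
    // With a prepare first, the worst case after a network error is a limbo
    // (Prepared) transaction, which can be re-queried and committed again.
    bool commit_two_phase(ServerStub& srv)
    {
        if (!srv.prepare())
            return false;                    // nothing durable yet, safe to give up
        if (srv.commit())
            return true;
        // Response lost: ask the server what happened and finish the job.
        return srv.query() == TxState::Committed || srv.commit();
    }

    int main()
    {
        ServerStub srv;
        std::cout << (commit_two_phase(srv) ? "committed\n" : "rolled back\n");
    }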

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] ODP: Modern C++: constexpr

2020-06-18 Thread Ann Harrison


> On Jun 17, 2020, at 12:52 PM, Adriano dos Santos Fernandes wrote:
> 
> There were UDR examples I wrote with master/slave terms and I'll change
> them.

For what little it's worth, this is not a new discussion.  When the VAX was 
being developed (1980) there was a move to eliminate violent and sexist 
terminology.  Programs were run, not executed, for example.  The 
extension, alas, remained .exe.  
Cheers,

Ann 

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Introduction of EXTENDED TIME/TIMESTAMP WITH TIME ZONE?

2020-03-10 Thread Ann Harrison


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-6190) gbak does not do sweep unless sweep threshold exceeded

2019-11-18 Thread Ann Harrison
On Sat, Nov 16, 2019 at 4:08 PM T J C (JIRA) 
wrote:

gbak does not do sweep unless sweep threshold exceeded
>
>
> I had understood that a backup would reset the Oldest Interesting
> Transaction (OIT)  as well as
> the Oldest Snapshot (OST) and had set the sweep interval to zero, and was
> doing nightly backups
> (see bottom for documentation issue)
>

I'm afraid you were confused.  Gbak does remove all the unneeded record
versions, deleted records,
and remnants of  failed transactions, which is the major benefit of a
sweep.  It never changes the
values of the OIT or OST on the header page.  The reason for that is
historic.  Gbak is a user level
application - reasonably smart, but user level.  Sweep is actually a
setting on an attachment and
as such it can do magic things like resetting values on the header page.

What is the OIT and why should anybody care?  Firebird maintains a bit for
every transaction that's ever been started.  If the bit is on, the transaction
committed.  If it's off, the transaction is active or dead.
The first bit that's off is the Oldest Interesting Transaction.  What makes
it interesting?  Every transaction
older than that is committed - meaning that when a record is read and its
transaction id is found to be
older than the OIT, it's good data.  Records with newer transaction ids
have to be checked against a bit
vector of transaction states to verify that their data was committed.
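
A rough sketch of the check described above (illustrative only, not Firebird
source code): anything older than the OIT skips the vector lookup entirely.

    #include <cstdint>
    #include <vector>

    // Toy model of the transaction state vector: one bit per transaction,
    // set = committed, clear = active or dead.
    struct TxStateVector {
        uint64_t oldest_interesting;        // OIT: everything below this is committed
        std::vector<bool> committed_bit;    // indexed by (txid - oldest_interesting)

        bool is_committed(uint64_t txid) const
        {
            if (txid < oldest_interesting)
                return true;                // fast path: no vector lookup needed
            uint64_t slot = txid - oldest_interesting;
            return slot < committed_bit.size() && committed_bit[slot];
        }
    };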

InterBase was created in the mid 1980's on computers that wouldn't power a
modern parking
meter.  Conserving memory was essential then, much less so now.
Maintaining a long bit vector of
transaction states cost a lot of memory, especially since the original
Firebird mode was Classic (can
you imagine?) and every connection had a copy of the bit vector.

Maintaining an up-to-date OIT is much less important now that computers
have thousands of times
more memory and in server mode, Firebird maintains one bit vector of
transaction states per database,
not one per connection.

Another change is the way failed transactions are handled.  Until about 20
years ago, the memory
cost of maintaining enough state to undo a transaction on rollback -
whether deliberate or through
a failed connection - was unsupportable.  When Firebird added savepoints,
it suddenly had everything
necessary to back out a failed transaction.  As far as I know, the only
time that a transaction is left
in failed/active state is after a crash - server, O/S, computer - which
keeps the clean up from happening.
So there just aren't as many problematic transactions as there were back in
the day.

So you're probably OK largely ignoring the OIT.  Run a sweep after a backup
once in a while and don't
worry about it much.   The Oldest Snapshot Transaction is important and
should be kept up to date, but
that's a question of transaction maintenance that neither sweep nor gbak
will affect.

>
> I have since found that when the sweep interval is set to zero, no sweep
> is done during a gbak backup
> regardless of the -g parameter. I would expect the sweep functionality and
> setting the OIT would be done
> when garbage collection is done.
>

The major sweep functionality is removing old record versions, deleted
records, and records created by
failed transactions. Gbak does that.  It just doesn't change the OIT.

>
> Of interest however, if the sweep interval is set below the (OST - OIT)
> immediately before the backup is done, then a gbak backup *DOES* do the
> sweep and set the OIT value, so I assume the sweep is being done.
>

That's because an actual sweep was triggered.

>
> In my opinion, a gbak backup should be doing a sweep, regardless of the
> sweep interval, unless the -g option is specified.
>

You are entitled to your opinion.  It just doesn't align with the design of
gbak.

>
>
> It seems to me that, at the least, this is a documentation issue.
> See https://firebirdsql.org/manual/gfix-housekeeping.html


Err.  That article has a couple of problems - some related to changes since
1985 and some to an apparent confusion.  The major source of garbage in the
database is old record versions that are not revisited, including deleted
records.  More on that some other time.

Good luck,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Power efficient userspace waiting with the umwait x86 instructions

2019-09-18 Thread Ann Harrison



> On Sep 16, 2019, at 7:50 AM, Roman Simakov  wrote:
> 
> Hi,
> 
> I guess it would be an interesting alternative for spinlock
> (https://kernelnewbies.org/Linux_5.3)
> 1.6. Power efficient userspace waiting with the umwait x86 instructions
> 
> More description is here:
> https://lwn.net/Articles/790920/
> 

Spinlocks in userspace are a performance disaster - here are some thoughts from 
Jim Starkey ...

Spin locks were invented at DEC for use in either the RSX-11M+ or VMS kernel 
for a very specific and narrow purpose.  Spin locks were used when the OS was 
initiating a very short device request where the expected completion time was 
less than the overhead to setup for and process a device interrupt.  They were 
used sparingly and effectively.

For virtually any other purpose, spin locks are a huge net loss for a very 
simple reason: hogging the processor while waiting for another thread or process 
not only wastes processor cycles but is quite likely to prevent the other process 
or thread from running and releasing whatever resource had induced the spin.

The last great performance leap on the Falcon/InnoDB race at MySQL was when 
Google replaced InnoDB's spin locks with Falcon's user mode synchronization 
locks, giving InnoDB a 20% or 30% kick under heavy load.

Long (or indeterminate) locks are best handled with user mode read/write 
synchronization objects.  Very short term locks can be avoided through 
judicious use of non-interlocked data structures managed with compare-and-swap. 
 AmorphousDB, for example, uses non-interlocked hash tables for things like 
transactions and network operation tickets; objects in these hash tables can be 
safely removed by first moving them into an object purgatory that is safely 
emptied by a cycle manager once a second (or whatever).
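
As a generic illustration of the compare-and-swap idea (nothing to do with the
AmorphousDB code itself), a minimal lock-free stack in C++ looks roughly like
this; note that node reclamation is deferred in real systems, which is exactly
what the "object purgatory" above is for.

    #include <atomic>

    // Minimal compare-and-swap managed stack - illustrative only.
    template <typename T>
    struct LockFreeStack {
        struct Node { T value; Node* next; };
        std::atomic<Node*> head{nullptr};

        void push(T value)
        {
            Node* n = new Node{value, head.load(std::memory_order_relaxed)};
            // Retry until no other thread changed head between the load and the swap.
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed))
                ;
        }

        bool pop(T& out)
        {
            Node* n = head.load(std::memory_order_acquire);
            while (n && !head.compare_exchange_weak(n, n->next,
                                                    std::memory_order_acquire,
                                                    std::memory_order_relaxed))
                ;
            if (!n) return false;
            out = n->value;
            delete n;   // real code would defer this to a safe reclamation point
            return true;
        }
    };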

There is an incredible amount of nonsense written about spin locks by folks who 
tend to believe that any technique used by an OS kernel must be very cool and 
very efficient.

The related idea of an instruction that can stall a processor waiting on memory 
write is every bit as flawed as spin locks even if it does reduce the amount of 
power consumed while gumming up the OS scheduler.

-- 
Jim Starkey


Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] ODP: ODP: ODP: ODP: Inserts and FKs

2019-09-07 Thread Ann Harrison

> On Sep 7, 2019, at 8:58 AM, Karol Bieniaszewski  
> wrote:
> 
> You are right, there is a bug, and a big one!
> I suppose that the Foreign Key index is not validated against the existence of the value in 
> the record itself, and its (the record version's) transaction number is not 
> compared to the snapshot number.
>  

It's also an old bug, probably dating to the implementation of foreign keys in 
InterBase.

The obvious implementation - validate the foreign key in the context of the 
client transaction - fails miserably in snapshot mode when the parent record is 
deleted by one transaction and a matching child record is inserted by a second 
concurrent transaction.  That leads to orphaned child records, which is very 
wrong.

The next possible implementation is to use the same internal omniscient mode 
that maintains unique and primary key constraints.  The omniscient mode sees 
that the current state of the database includes a committed parent record 
that matches the proposed insert.  That eliminates the orphan child problem, 
but introduces the problem Carlos discovered.  The transaction that stores the 
child record can "see" the master record for the purpose of validating the 
relationship between the two records, but for no other purpose.  That's a 
slightly obscure case - updating the master when a child is stored tends to 
create a hotspot - but it's certainly legitimate.  

Good luck,

Ann


Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Inserts and FKs

2019-09-06 Thread Ann Harrison






> On Sep 6, 2019, at 8:24 AM, Mark Rotteveel  wrote:
> 
>> On 6-9-2019 01:46, Carlos H. Cantu wrote:
>> I understand that there are other scenarios where the current FK
>> behavior is correct and makes sense, for example, in the case of
>> avoiding deleting a master record with "committed but not visible
>> childs",

Yes.  Unique, Primary Key, and Foreign Key constraints are handled in a special 
omniscient mode to avoid concurrent, incompatible changes. Triggers and check 
constraints operate in the mode of the user transaction. 

>> but for the reported example, the current behavior looks
>> incorrect, and for people with business logic implemented in triggers,
>> it may/will lead to incorrect results.
> 
> I think you're right. You should only be able to insert records that 
> reference records that are visible to your transaction. Given Tx2 started 
> before Tx1 committed, the effects from Tx1 aren't visible to your 
> transaction. Your insert in Tx2 should fail as the master record from Tx1 
> doesn't exist from the perspective of Tx2.

Interesting.  In the case of inserting a child, the master must be visible to 
the transaction doing the insert.  In the case of deleting a master, the 
existence of a child - even if uncommitted - must block the delete.  
> 
>> Does anyone knows if this behavior is following the Standard?
> 
> I don't think this behaviour is correct in view of the standard, but I 
> haven't looked it up.
> 

No, this behavior is not standard compliant.  

Good luck,

Ann


Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Attaching non-pooled memory or other resources to memory pool lifecycle

2019-09-02 Thread Ann Harrison


> On Sep 2, 2019, at 6:26 AM, Alex Peshkoff via Firebird-devel wrote:
> 
> When the first pool-enabled classes were added to Firebird ~15 years ago 

Double that - of course there weren’t classes thirty years ago.  

 Cheers,

Ann

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Detecting a parameter is DB_KEY

2017-05-31 Thread Ann Harrison
> On May 30, 2017, at 5:06 PM, Mark Rotteveel  wrote:

> 
> BTW: isc_dpb_dbkey_scope=1 should extend it to session/connection scope

At some cost in garbage collection and an extra transaction start. 

Cheers,

Ann

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Detecting a parameter is DB_KEY

2017-05-30 Thread Ann Harrison

> On May 25, 2017, at 8:13 AM, Mark Rotteveel  wrote:
> 
>>> How can I find out if a parameter is a DB_KEY?
> 
> I'm implementing JDBC ROWID support in Jaybird

Does it matter that the ID is not consistent except within a transaction?

Cheers,

Ann

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-5538) Add ability to backup/restore only those (several) tables which are enumerated as command line argument (pattern)

2017-05-16 Thread Ann Harrison
This is far from a simple request and would require fundamental changes to 
gbak.  Gbak is a logical dump of database contents that when restored creates a 
new database.  What would a restore of a partial backup create?  A partial 
database?  An overwritten old database?

What benefit would this feature bring?

Regards,

Ann

> On May 15, 2017, at 9:44 AM, Pavel Zotov (JIRA)  
> wrote:
> 
> Add ability to backup/restore only those (several) tables which are 
> enumerated as command line argument (pattern)
> -
> 
> Key: CORE-5538
> URL: http://tracker.firebirdsql.org/browse/CORE-5538
> Project: Firebird Core
>  Issue Type: Improvement
>  Components: Engine, GBAK
>Reporter: Pavel Zotov
>Priority: Trivial
> 
> 
> gbak -? 2>&1 | findstr /i /c:"skip"
> 
>-SKIP_D(ATA) <list>  skip data for all tables which are specified in
> the <list>
> 
> This command switch is useful when we want to skip SEVERAL but leave DOZENS of 
> tables, but it does NOT allow solving the opposite task: when we need to b/r 
> only several tables out of a huge total number.
> Please consider implementing a command-line switch like this:
> 
>-SKIP_E(XCEPT) <list>  skip data for all tables EXCEPT those which are 
> specified in the <list>
> 
> -- where <list> must follow SIMILAR_TO logic and rules.
> 
> 
> -- 
> This message is automatically generated by JIRA.
> -
> If you think it was sent incorrectly contact one of the administrators: 
> http://tracker.firebirdsql.org/secure/Administrators.jspa
> -
> For more information on JIRA, see: http://www.atlassian.com/software/jira
> 
> 
> 
> Firebird-Devel mailing list, web interface at 
> https://lists.sourceforge.net/lists/listinfo/firebird-devel

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-5507) Wrong value of the new field at the old records, created before that new field was added.

2017-03-24 Thread Ann Harrison
It's been a long time, but I think that's an ancient behavior that Jim and I 
argued about many years ago.  Maybe even in Rdb/ELN, InterBase's ancestor.  

Unless my memory fails me (again) the internal format rectifier doesn't go 
through all intermediate formats, it just converts from the stored format to 
the format requested.  I came up with some places where not considering the 
intermediate formats produced different results, but most were errors. Jim 
thought it was idiotic to go through a lot of extra work to discover errors 
that had been corrected.  My use case may have involved changing a field from 
varchar to double and back when the field contains alphabetic characters.  The 
non-errors might be that changing a field from double (format 1) to float 
(format 2) and back (format 3) had the result that format 1 records had 
truncated values (at the low end) when seen as format 2, and went back to full 
precision when viewed as format 3.

The case at hand includes a relatively new feature - new fields that are not 
null and include a default.

 1. Create a table with no field called "NewField".  (Format 1)
 2. Store a record with the primary key 1.
 3. Alter the table, adding "NewField", not null, default "Ann".  (Format 2)
 4. Store a record with the primary key 2 and no value for "NewField".
 5. Alter the table again, changing the default to "Jim".  (Format 3)
 6. Store a record with the primary key 3 and no value for "NewField".
 7. Read all the records:

 1 Jim
 2 Ann
 3 Jim

The situation is that Firebird converts record 1 from format 1 to format 3 
without going through format 2.  If it had gone through format 2, the initial 
default value would be applied and you'd see

1 Ann
2 Ann
3 Jim

Should the behavior be changed?  It's ancient.  It has benefits (e.g. changing 
a column from double to float and back).  The benefits are in dumb cases.  The 
new behavior might be more standard conformant, if the standard allows adding 
Not Null columns with defaults and the Standards Committee assumed that default 
values were added in the most crude way possible.  

Cheers,


Ann

Just explained this to Jim who said "That's a dumb case.  Who cares?"


Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-5460) Insert NULL into identity column with auth generated value

2017-01-19 Thread Ann Harrison


> On Jan 19, 2017, at 1:46 AM, Dmitry Yemanov <firebi...@yandex.ru> wrote:
> 
> 19.01.2017 00:51, Ann Harrison wrote:
>> 
>> In what universe does that make sense?  The field is NOT NULL.  You're 
>> storing NULL in it.  That's an error.
> 
> I'd say it depends. What about a BEFORE trigger converting input NULL to 
> something valid before storing?

Sure, a before trigger can fix up bad values and avoid an error.  I haven't 
followed the development of the SQL standard for the past decade.   If it now 
says that assigning NULL to a column that disallows NULLs means that you should 
apply some other value in some cases, then I guess I know in what universe that 
makes sense.

So what does 0-FEB-2017 mean in this brave new world?

> 
> 
> Dmitry
> 
> 
> Firebird-Devel mailing list, web interface at 
> https://lists.sourceforge.net/lists/listinfo/firebird-devel

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-5460) Insert NULL into identity column with auth generated value

2017-01-18 Thread Ann Harrison
On Jan 18, 2017, at 11:17 AM, Gerhard S (JIRA)  wrote:
> 
> Insert NULL into identity column with auth generated value
> --
> 
> Key: CORE-5460
> URL: http://tracker.firebirdsql.org/browse/CORE-5460
> Project: Firebird Core
>  Issue Type: Improvement
>Affects Versions: 3.0.0
> Environment: Windows 10 64bit, LibreOffice 5.3.0RC1
>Reporter: Gerhard S
> 
> 
> Could you support inserting rows where the value for the identity column is 
> passed as NULL in order to increment the value automatically?.
> 
> Example:
> create table testtbl (
> id integer generated by default as identity (START WITH 0) NOT NULL primary 
> key,
> name varchar(15)
> );
> 
> insert into testtbl values (NULL, 'name1');
> 
> This only makes sense, if the column is NOT NULL, I guess. Other database 
> systems such as MySQL, HSQLDB, MariaDB allow that.


In what universe does that make sense?  The field is NOT NULL.  You're storing 
NULL in it.  That's an error.  Not an error only if there's no default value, 
not an error only if there's not a sequence.  It's an ERROR.  And it's quite 
typical of MySQL, which tries to make life easier for developers by not giving 
an error if you store an out-of-range value in a 16 bit integer field - it 
stores 32767 -- you know, best try.  Like 0-FEB-2017 matches any day in February.  

Maybe if you assign 'ABC' to integer it should store 123?  Or the RAD50 values?


Death to cute hacks!

Good luck,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] arithmetic exception, numeric overflow, or string truncation after change column datatype and select from older format view

2016-11-23 Thread Ann Harrison


> On Nov 22, 2016, at 7:26 PM, Leyne, Sean  wrote:
> 
> 
>> SQL>
>> SQL> select * from v;
>> 
>> A
>> ==
>> Statement failed, SQLSTATE = 22001
>> arithmetic exception, numeric overflow, or string truncation -string right
>> truncation
>> SQL>
>> 
>> 
>> As you can see... there is no error until the records are inserted or updated
>> with a value whose length is greater than the previous size... One way or
>> another, the view stores the datatype (record format?), and if a string is
>> longer than that, the error is thrown... IIRC the same occurs with computed
>> columns (did not make a test case to check it out)
>> 
>> To make things worse... The error is the one I consider most cryptic, since
>> you don't know the column name, the value, or which record is the culprit... a
>> very annoying one to get in production...
>> 
>> The same error can be reproduced on FB 3.0.
>> 
>> What do you think about it ?
> 
> Think this is a perfect example of why DDL changes should only be performed 
> in single-user/connection mode.
> 
> I can't imagine how the engine could manage to keep the schema/object cache 
> and transaction context in sync or, in the absence of that, to 'broadcast' 
> that schema has changed and any existing cached object should be released as 
> soon as possible, to force the object definition to be reloaded. 

Exactly that mechanism does exist.  Every connection holds an existence lock on 
objects it has cached.  The lock includes a mechanism to notify the holder if 
an object has changed.  Since IB V1.  

Cheers,

Ann



--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Feature request & discussion for V4 (same as for V3)

2016-04-11 Thread Ann Harrison
On Mon, Apr 11, 2016 at 7:40 AM, Dimitry Sibiryakov 
wrote:

> 11.04.2016 13:28, Dmitry Yemanov wrote:
> > But it can be made possible. The question is whether it's worth it.
>
>    While the bug with orphan index nodes lives in the engine, an index-only
> scan is completely impossible.
>

Without transaction information in the index no purely index-based scan
is possible.  Orphans don't make any difference.  It doesn't matter whether
the record isn't there or has had its key value  changed by a transaction
that's visible to the current transaction.

>    Transformation numeric->double can lose data.


Err.  Not necessarily and probably there's a work around.  Conversions from
numeric to double are precise up to 56 bits.  For values greater than 56
bits, one could add the last byte of the value to the end of the mangled
double and get full precision.  With that, you could drop the special
indexes for INT64.
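
A quick self-contained illustration of the precision point, using IEEE double
(53 significant bits; the 56 bits mentioned above presumably reflects the older
VAX double format InterBase grew up on): integer values that fit within the
significand round-trip exactly, wider ones may not.

    #include <cstdint>
    #include <iostream>

    // Does an integer key survive a round trip through double?
    static bool round_trips(int64_t v)
    {
        return static_cast<int64_t>(static_cast<double>(v)) == v;
    }

    int main()
    {
        int64_t small = (int64_t{1} << 53) - 1;   // fits in the significand
        int64_t big   = (int64_t{1} << 53) + 1;   // one bit too wide
        std::cout << small << " round trips: " << round_trips(small) << "\n";   // 1
        std::cout << big   << " round trips: " << round_trips(big)   << "\n";   // 0
    }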



> Using integers as a key will disable
> altering of numeric columns. May be it worth considering.
>

Not necessary - even when dealing with fractional values, decimals of fewer
than 57 bits will convert properly in both directions.  There may be some
slop if you try to compare the values exactly, but as long as the
conversion works a slight incompatibility doesn't matter.

>    Transformation string->key by ICU does lose data, no way back.
>

I don't think that matters either.  If your collation is accent or case
insensitive, your lookup will also be case and accent insensitive.  You've
asked to lose that information, so its loss is of no concern.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Feature request & discussion for V4 (same as for V3)

2016-04-09 Thread Ann Harrison
On Fri, Apr 8, 2016 at 5:54 AM, Molnár Attila  wrote:

>
>
> *Optimizations*
> - IS NOT NULL should use an index. It is equivalent to >= min_value or
> <= max_value based on index direction
>

 Histograms and clustered indexes (if they're being considered) could help
here to detect cases where IS NOT NULL returns a small subset of the
records in a table.  In general, searches that touch more than half the
records in a table are more efficient when made in storage (natural) order
rather than through an index.  Remember that Firebird stores data and
indexes separately, so setting up an indexed retrieval that will touch
every page in a table is just overhead compared with straight-forwardly
reading every page.

> - condition pre-evaluation and reduction. e.g.: WHERE 1 = 2 AND field =
> :param is always FALSE. Evaluation is not needed for all records; it can be
> decided at prepare time whether the result is an empty result set or an
> unfiltered result set.
>

When InterBase was created, there was a lot of academic work on optimizing
corner cases, with the result that academic databases tended to spend more
time optimizing than retrieving.  We made the deliberate choice not to
spend optimizer time saving idiots from themselves.  Thirty years later,
maybe we'd choose differently.  However, lots of programs depend on tricks
like +0 and concatenating with an empty string to coerce unnatural but
effective plans.  I'd worry about the damage done to those cases.


> - use index in "NATURAL" mode when column in a conditional appears in
> a multi column index, but not in the first place. You may reduce number of
> database page visits in this way : index page can hold more effective
> record data because it's narrower than the table data page record (also in
> worst case it could be worse than NATURAL because ot the mixed index and
> table data page read, but I think overall it could worth it, especially in
> big tables. measurements needed)
>

I'm not sure what you mean by "NATURAL" index mode - "natural" usually means
reading the data pages in storage order without any index. If you mean
reading across the leaf level of the index to find matches in the second
and subsequent keys in an index, you have no idea how hard that would be.
Firebird index keys are mashed up values created so they compare bytewise
in the desired order.  When using an index, Firebird hasn't a clue where
the boundaries fall between columns in multi-column index.  It's just
bytes.   The format makes indexes dense and comparisons quick.   Changing
the key format to support partial matches on second and third columns seems
like a bad idea, given that there's very little difference between having
an index on each column and a multi-column index.  Remember that Firebird
uses multiple indexes on a single table.



> - SELECT DISTINCT <fields> FROM table is slow (natural scan on
> all records) and SELECT <fields> FROM table GROUP BY <same fields> is also
> slow (worse!: index scan on all records). I think in this
> case it's not necessary to read all the records in the table, it should be
> enough to read the number of distinct <fields> values from the table. (currently
> you have to keep a separate table with this information because you can't
> access this information fast)
>

Unh, no.  Indexes are multi-generational structures, so they often contain
more entries than there are records visible to any one transaction.  At a
minimum, you've got to touch the records that appear good candidates from
the index.

Good luck,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] BLR contexts limit

2016-03-24 Thread Ann Harrison
On Thu, Mar 24, 2016 at 7:52 AM, liviuslivius 
wrote:

>
> > I'm not sure I get you here. How number of columns does relate to number
> > of contexts?
> >
>
> this is because Array DML (in Delphi) uses execute block with parameters
> and all parameters/variables are counted as contexts
>
> PS. after the fix, will this limit be gone entirely or will it be increased
> to some value?
>
>

Err... they what?!!!  One context per parameter?  What do contexts have to do
with parameters?  Talk about living on larks' tongues - pick a scarce resource
and use it profligately...

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: Tablespaces

2016-03-03 Thread Ann Harrison
On Thu, Mar 3, 2016 at 1:02 PM, Jim Starkey  wrote:

>
> >> Non-goals:
> >>
> ...
> >>   2. Store blobs in other than the table's data space.
> > Why not allow blobs to be separated from regular data ?
>
> OK, reasonable question.  Obviously they could, but it would require
> either storing small blobs off page or changing the mechanism used for
> blob ids, or both.  It also runs the risk of the records and blobs
> diverging, which is very, very bad.
>
> I think the benefit of separating records and blobs is quite limited.
> Large blobs have at least their rear end stored on blob pages that
> aren't scanned for exhaustive retrievals.  Moving them to a separate
> data space within the same table space so they don't share the record
> number space with records is well worth considering.
>

I think the problem is with the size of backups and the amount of time
taken by a backup/restore for a database with a significant number of
large blobs.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: Tablespaces

2016-03-03 Thread Ann Harrison
On Wed, Mar 2, 2016 at 4:51 PM, Vlad Khorsun 
wrote:

>
>Blobs could be moved into separate tablespace. It could make backup of
> "data" tablespace faster and smaller. We can even think about "offline"
> tablespace.
>
>
Interesting idea. I'm not quite sure how it would work ... new blobs should be
backed up, I'd guess, and old ones ignored.  That's pretty far from the way gbak
works.  But gbak may not be the right tool for terabyte databases.

 Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: Tablespaces

2016-03-03 Thread Ann Harrison
On Wed, Mar 2, 2016 at 4:23 PM, Dmitry Yemanov  wrote:

>

> When we speak about tablespaces, it usually means that the database
> consists of multiple files and different database object are stored in
> different files. Each such file is named within a database and called a
> tablespace.
>

It's probably not a reasonable concern at this point, but applying Oracle
tuning mechanisms to a Firebird database probably won't have the
performance benefits users expect.  It's going to take a lot of education to
convince people to use a feature they think they understand in a way that
works with Firebird.  Maybe I'm just arguing for a different name.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] optional SET TERM

2015-10-15 Thread Ann Harrison

> On Oct 15, 2015, at 9:59 AM, Dimitry Sibiryakov  wrote:
> 
> 15.10.2015 15:51, marius adrian popa wrote:
>> In InterBase 7 this is changed so that the procedure and trigger programming
>> language no longer requires the use of SET TERM
> 
>   IMHO, this is an unnecessary complication of isql's parser. I'd prefer to 
> follow the KISS concept.

Respectfully disagree.  Yes, it complicates isql, but it makes the user's life 
easier.   One bit of complication in isql; thousands of triggers and procedures 
that are simpler to create.  

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Altering collation of field

2015-09-27 Thread Ann Harrison
On Wed, Sep 23, 2015 at 5:25 AM, Jiří Činčura  wrote:

> Nobody?
>
> --
> Mgr. Jiří Činčura
> Independent IT Specialist
>
> On Wed, Sep 16, 2015, at 12:28, Jiří Činčura wrote:
> > Hi *,
> >
> > there's currently no way of altering the collation of a field in 2.5.4, right?
> > I checked all the possible documentation on firebirdsql.org and haven't
> > found a way. So only system table modification seems to be an option.
>

Err, I wouldn't trust that.  Changing the collation requires rebuilding
indexes on the field.  I doubt that the logic was built into the system
table level updates to recreate indexes when a collation changes.

Good luck,

Ann


>
>
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Preventing error code collision

2015-07-27 Thread Ann Harrison

On Sun, Jul 26, 2015 at 5:15 PM, Vlad Khorsun wrote:

>> Or is there a reason to ignore those higher bits for the facility and code?
>
> I have no idea why ENCODE_ISC_MSG is written in this way.
>
>> CLASS_MASK seems to not be used anywhere, or at least I can't remember ever
>> having seen an error code with the bit 30 (warning) or 31 (info) set. Or is
>> it used somewhere internally as an in-band channel?
>
> Looks like something planned in the past (before Firebird) but never used...

27.07.2015 1:24, Ann Harrison wrote:
> Firebird was based on InterBase which was based on Rdb/ELN, an implementation
> of DEC's [standard(!)] relational interface.  As part of DEC's VAX software
> empire, DSRI used DEC's error message facility.  Every project had a code and
> used it as a prefix to its error messages.

On Mon, Jul 27, 2015 at 4:14 AM, Vlad Khorsun hv...@users.sourceforge.net wrote:
> Ann, thanks for the explanation.  But it is still not clear why ENCODE_ISC_MSG
> is more strict than necessary.  It limits the number of facilities to 31 (the
> bitmask used allows 255) and the number of codes per facility to 16383
> (instead of 65535).
>
> Currently we have no problem with it, just curious...


The high bits in the codes may have been intended for severity ... I really
don't remember.  The high bits in the facility were to distinguish
relational database errors from other DEC product errors.  I'd feel free to
use any of them.
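
Purely as an illustration of the kind of packing being discussed - this is NOT
the actual ENCODE_ISC_MSG macro - a layout with a 14-bit code, a facility field
above it, and the two class bits at 30-31 could look like this:

    #include <cstdint>

    // Illustrative packing only, using the limits mentioned in the thread:
    // low 14 bits = message code (max 16383), next bits = facility,
    // bits 30-31 = class (error / warning / info).
    constexpr uint32_t CODE_MASK   = 0x3FFF;
    constexpr uint32_t FAC_SHIFT   = 14;
    constexpr uint32_t CLASS_SHIFT = 30;

    constexpr uint32_t encode(uint32_t cls, uint32_t facility, uint32_t code)
    {
        return (cls << CLASS_SHIFT) | (facility << FAC_SHIFT) | (code & CODE_MASK);
    }

    constexpr uint32_t get_class(uint32_t v)    { return v >> CLASS_SHIFT; }
    constexpr uint32_t get_facility(uint32_t v) { return (v >> FAC_SHIFT) & 0xFFFF; }
    constexpr uint32_t get_code(uint32_t v)     { return v & CODE_MASK; }

    static_assert(get_code(encode(1, 20, 335)) == 335, "code round trips");
    static_assert(get_facility(encode(1, 20, 335)) == 20, "facility round trips");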

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Preventing error code collision

2015-07-26 Thread Ann Harrison
On Sun, Jul 26, 2015 at 5:15 PM, Vlad Khorsun hv...@users.sourceforge.net
wrote:

>> Or is there a reason to ignore those higher bits for the facility and code?
>
> I have no idea why ENCODE_ISC_MSG is written in this way.
>
>> CLASS_MASK seems to not be used anywhere, or at least I can't remember
>> ever having seen an error code with the bit 30 (warning) or 31 (info)
>> set. Or is it used somewhere internally as an in-band channel?
>
> Looks like something planned in the past (before Firebird) but never used...


Firebird was based on InterBase which was based on Rdb/ELN, an
implementation of DEC's [standard(!)] relational interface.  As part of
DEC's VAX software empire, DSRI used DEC's error message facility.  Every
project had a code and used it as a prefix to its error messages.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Dropping index on domain change?

2015-07-01 Thread Ann Harrison


> On Jul 1, 2015, at 9:34 AM, Dmitry Yemanov firebi...@yandex.ru wrote:
> 
> 01.07.2015 13:49, Vlad Khorsun wrote:
> 
>>> Not sure what you mean by same? Let's say I change it from smallint to int.
>>
>> All kinds of numbers have the same representation in index keys.
> 
> Except BIGINT, IIRC.
> 

Which was a mistake on Borland's part.  InterBase originally had an eight byte 
integer (called a quad) that supported the VAX datatype.  Indexes on quad used 
the same representation as other numeric types - a mutilated double precision.  
Since InterBase  Firebird always retrieve a range of values to make up for the 
imprecise conversion from decimal fractions to binary fractions, the loss of 
precision doesn't matter.  If the Borland engineers had really thought about 
it, they could have added a few extra bits at the end of the key to handle the 
exact precision, but instead they decided to add a new index key type.   

On some ODS change it might make sense to go back to a single format for all 
numeric indexes to make enlarging numbers easier.


Cheers,

Ann




Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] usage privileges

2015-03-29 Thread Ann Harrison

 On Mar 29, 2015, at 8:58 AM, Alex Peshkoff peshk...@mail.ru wrote:
 
 Currently access to sequences/generators and exceptions is not limited, 
 i.e. a user not explicitly granted any rights can access sequences and 
 exceptions. I wonder - who added these privileges in such a way? Is it WIP 
 or a bug that requires fixing?

I can only speak to generators which were added a long time ago.  At that time, 
InterBase had two security models - a permissive mode that assumed all usage 
and allowed the administrator to restrict access, and the beginning of the SQL 
model which was used only to the extent it was defined in the standard, which 
didn't recognize generators.  So all access was allowed to generators by 
default.  I guess if somebody had asked, we'd have added the ability to 
restrict access.

Adding SQL style permissions will require some thought, since nobody has 
granted all rights to all on generators and suddenly restricting access to them 
will be a serious nuisance.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Time to update our headers.

2015-03-14 Thread Ann Harrison

 On Mar 14, 2015, at 8:00 AM, Mark Rotteveel m...@lawinegevaar.nl wrote:
 
 I have skimmed the MPL 1.0, 1.1 and 2.0 but as far as I can tell it 
 never assigns specific rights to Netscape or Mozilla. Could you point 
 out where it does this?

In 1.0, the problems come in section 6.  I haven't checked 1.1.  V2.0 
specifically names the Mozilla foundation as steward of the license.
 
 I can't see any reason to muck around with licenses at all.
 
 Agreed.

Likewise.  
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Time to update our headers.

2015-03-13 Thread Ann Harrison
Jim wrote:

 MPL should never be used for Firebird code.  It gives specific rights to 
 Netscape.
 
And another several interesting and valuable lessons of history.   It's not 
true that he had to talk me out of GPL.  I knew that the goal of open source 
InterBase was to provide tools for future commercial developers who couldn't 
afford license fees charged by Oracle, SQLServer, or say Inprise...   Make it 
up with superior service.

 
 So to paraphrase Adriano, please state the problem before the solution.
 

And if the problem is "Because OSI says so", let me share a bit of history 
between the Firebird project and OSI.  

Originally the open source InterBase was to be an Inprise affiliated company 
and we were all going to work together to improve the lives of database 
application developers.  Then things went wrong and suddenly there was 
Firebird, frozen open source InterBase, and InterBase an Inprise product.  Not 
to mention a bunch of very angry people.  

One of the first tasks (after drinking a lot) was to create a license so 
Firebird code would not belong to Inprise going forward.  For that, we took MPL 
(1.1, I believe) and removed references to Netscape or the Mozilla Foundation 
or MPL.  Mozilla had very clear rules that if you changed one word of the 
license, you couldn't call it MPL.  Fine.  Call it IDPL.

So I applied to the OSI for blessing for our license.  I thought it was a slam 
dunk, since it has all the goodness of MPL without any tie to a corporate 
entity.  No.  The OSI required that an attorney familiar with open source 
licenses explain in detail why this license met their standards and was 
different from all existing approved licenses.  This was in mid-2000.  
Attorneys familiar with the fine points of open source licenses were rare and 
usually affilated with one open source camp or another.  And even if one could 
be found, it would cost a couple of thousand dollars that we didn't want to 
spend on an attorney.

A couple of years later, when we did have some Firebird money, I asked again.  

Them:   No.  There are too many licenses.  Choose one and use it.  If you like 
MPL, use it.  

Me:  But it gives rights over our license and code to an entity that has 
nothing to do with us.

Them:  Tough.  There are too many licenses.

This was shortly after we'd had a confrontation with the Mozilla foundation 
about the probability that an open source database called Firebird could be 
confused with or harmed by an open source browser called Firebird.  That was 
the only time in my life I've had a death threat.   So the Mozilla Foundation 
wasn't high on my lists of organizations on whom Firebird's future should 
depend.  

OSI has a nice name and a nice logo.  But I still consider them pig-headed 
jerks.

Which has nothing to do with decisions made by Firebird fifteen years later, 
nor should it. Dmitry has given good reasons not to meddle with the licensing.  
By all means, change the headers to point to a more stable license repository, 
but don't change the license.

With best regards,

Ann










Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Time to update our headers.

2015-03-05 Thread Ann Harrison
I wrote:
 
 My recollection is that the IPL was worked out among Paul, me, Dale Fuller, 
 and a few others.  The original Mozilla licenses gave ownership rights to 
 Netscape.  We changed it to Inprise and its successors.  When Firebird 
 launched as an independent entity, we (same crowd, minus Dale) created the 
 IDPL which gave the individual developer ownership rights.  For better or for 
 worse, we didn't want any entity, including the not yet created Firebird 
 Foundation, to be able to take the code private or create closed source 
 enterprise versions.  

Another advantage of the IDPL is that if you write some useful new code - a new 
compression algorithm for example - you own that code even after applying the 
IDPL.  You can't disallow use under the IDPL, but you can include the code in 
other projects with other licenses.  Me, I want Firebird to be forever free.  
Jim wanted to be able to contribute code and use elsewhere.  We found common 
ground.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Time to update our headers.

2015-03-05 Thread Ann Harrison

 On Mar 5, 2015, at 8:38 AM, marius adrian popa map...@gmail.com wrote:
 
 OK, understood.  What I ask is that new code be under MPL 2.0 without any 
 changes to the license text, like they did in the LibreOffice case:  
 
 http://cgit.freedesktop.org/libreoffice/core/tree/COPYING.MPL
 This will simplify the license understanding (no more IDPL/IPL license text 
 forks of MPL 1/1.1...)
 
Do we really want the Mozilla Foundation to be the Steward of our license?

Cheers,

Ann

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Odp: READ UNCOMMITTED implementation

2015-03-05 Thread Ann Harrison


 On Mar 5, 2015, at 11:26 AM, liviusliv...@poczta.onet.pl 
 liviusliv...@poczta.onet.pl wrote:
 
 Hi, 
 
 It is useful for testing purposes. Consider monitoring what actually is in 
 some table from another connection or software before some long task has 
 finished.

Nothing is in a table until it is committed, so there's nothing to monitor.  


Best regards,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Time to update our headers.

2015-03-05 Thread Ann Harrison


> On Mar 5, 2015, at 12:07 PM, Paul Beach pbe...@ibphoenix.com wrote:
> 
>> 05.03.2015 16:13, Dimitry Sibiryakov wrote:
>>> We are talking about sources (particularly headers), no?..
>>
>> I mean that if you are going to update headers to a new license URL, it would
>> be better if it is the official project's site URL, not the IBPhoenix one.
> 
> The IPL license hosted on IBPhoenix was the license originally used, i.e. we 
> hosted the original license that the InterBase code was released with (as Ann 
> and I devised it). If I remember rightly the Inprise license was slightly 
> modified after release.
> 
> We also hosted the IDPL license because, again if I remember rightly, Jim 
> initially used the IDPL for the Vulcan work.
> 
 

My recollection is that the IPL was worked out among Paul, me, Dale Fuller, and 
a few others.  The original Mozilla licenses gave ownership rights to Netscape. 
 We changed it to Inprise and its successors.  When Firebird launched as an 
independent entity, we (same crowd, minus Dale) created the IDPL which gave the 
individual developer ownership rights.  For better or for worse, we didn't want 
any entity, including the not yet created Firebird Foundation, to be able to 
take the code private or create closed source enterprise versions.  

Again, at the time, IBPhoenix had a website based on Netfrastructure and the 
Firebird Foundation didn't, since it didn't exist.  At this point inertia takes 
hold.  When you're working on code, the last thing you think about is updating 
headers.  When Netfrastructure ceased to be supported, IBPhoenix changed its 
website and the old addresses went away.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Odp: 255 contexts limit

2015-02-01 Thread Ann Harrison


 On Feb 1, 2015, at 3:13 PM, liviusliv...@poczta.onet.pl 
 liviusliv...@poczta.onet.pl wrote:
 
 P.S. I do not know how contexts are counted?

Every data input source is a separate context.

 Every table in join?

Every input stream - so a self join has more contexts than tables.


 this is one context
 (SELECT FIRST 1 SKIP 30 Field_X FROM TABLE_X WHERE order),

Right.

 but this is two? I suppose that this one is true
 (SELECT FIRST 1 SKIP 30 Field_X FROM TABLE_X INNER JOIN TABLE_Y WHERE 
 order),

Yes.  That's two. This is a BLR issue, so a view reference uses only one 
context even if the view includes multiple tables.  There was another issue with 
an internal compilation block that limited the total number of input sources, 
but I believe that was fixed some versions ago.


Good luck,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RDB$RELATIONS.RDB$SYSTEM_FLAG

2015-01-08 Thread Ann Harrison
On Thu, Jan 8, 2015 at 10:27 AM, Dimitry Sibiryakov s...@ibphoenix.com
wrote:



What were supposed to be tables/views with a system flag > 1?


As far as I know, the only value used was 2, which QLI used for its tables
of procedures and aliases.

Cheers,

Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Fields in data pages

2014-12-01 Thread Ann Harrison
Simon,

 
 Not to be rude or anything, but does FirstAID Extractor decrypt all types of 
 BLOBs?

Firebird, like InterBase, has a mechanism for translation among blob formats 
called blob filters.  There is a filter that translates the RDB$DESCRIPTOR 
format to a readable format.  Isql, the utility distributed with Firebird, uses 
that filter when you ask it to display an RDB$DESCRIPTOR blob.  Similar 
filters translate blobs containing the binary format of BLR into something that 
makes sense to people.

As far as I know, all the non-text blobs used in the metadata have blob filters 
to return their contents as text.

The first place to look for the format of records is the catalog of system 
tables.  RDB$RELATIONS describes tables and views.  RDB$FIELDS describes 
domains - field definitions independent of their use in a particular table or 
view.  RDB$RELATION_FIELDS describes the use of a field in a table or view and 
allows some attributes of the domain to be overridden.  Among other information 
in RDB$RELATION_FIELDS are the field_id - a unique identifier for that field 
in that table, and the field_position which is the default ordering of fields 
in the output.  The field position can be changed.  The field id cannot.  When 
a field is dropped from a table, its field position can be reused, but its 
field id will never be reused.

So, going from the higher level system tables to RDB$FORMATS, when a table is 
created, Firebird creates format 1 for that table.  The descriptor contains a 
list of the fields identified by field id and described in terms of type and 
length.  When a table is altered, Firebird creates a new  RDB$FORMATS record 
that contains the new low-level description.  When a record is stored, the 
current format id  is included in the record header.  When a record is read, 
Firebird checks the format id, and if it is not the current format, updates the 
record to the current format, dropping old data from fields that were dropped, 
and creating new null (or default?) values for fields that had not existed in 
the in the old format.

A data page consists of two parts:  an index consisting of offsets and lengths 
and a data portion.  When looking for record 10 on a particular page, Firebird 
reads the 10th index entry to get the offset and length of the compressed 
record.  It reverses the run length compression to recreate the record data at 
its full declared length.   It then finds the format id in the record header, 
validates that the format is current or updates the record to the current 
format,  and then reads the record as necessary.  Obviously, if the record is 
fragmented across pages, the process is more complicated because the trailing 
portion of the record must be read before decompressing the data.
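
A very rough sketch of that lookup (simplified, not taken from the Firebird
sources - the real page layout is declared in ods.h):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Each slot in the page index gives the offset and compressed length of
    // one record on the page.  Simplified illustration only.
    struct SlotEntry { uint16_t offset; uint16_t length; };

    // Return the compressed bytes for a given slot; decompression, format
    // lookup and fragment handling would follow, as described above.
    std::vector<uint8_t> compressed_record(const uint8_t* page, size_t page_size,
                                           size_t header_size, unsigned slot)
    {
        SlotEntry e;
        std::memcpy(&e, page + header_size + slot * sizeof(SlotEntry), sizeof(e));
        if (e.length == 0 || e.offset + e.length > page_size)
            return {};                       // empty or invalid slot
        return std::vector<uint8_t>(page + e.offset, page + e.offset + e.length);
    }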

Good luck,


Ann
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Struct declarations

2014-11-30 Thread Ann Harrison


 On Nov 29, 2014, at 2:44 PM, Stuart Simon stuart...@gmail.com wrote:
 
 I am writing a research paper that involves the Firebird source code. I
 have found the Firebird Internals Reference, but it dates from 2009. My
 questions: Are the struct declarations still the same? Where could I find
 them (or their C++ equivalents) in the source code? Thank you!
   

I'm not sure whether you're interested in the on disk structure or the 
classes/structures that manage the run-time parts of Firebird.  The archives at 
IBPhoenix.com have some old papers on the ODS.  Not much has changed, though 
they're ten years old.  Search for Firebird for the Database Expert.  Runtime 
structures are more complex and volatile.  Generally the structure definitions 
are in the .h files.  The C structures for the ODS are in ods.h.


Good luck,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Fields in data pages

2014-11-30 Thread Ann Harrison


 On Nov 30, 2014, at 6:35 PM, Stuart Simon stuart...@gmail.com wrote:
 
 OK, but version numbers tell me nothing. What I am looking for is an 
 explanation of how Firebird (the package) can tell which bytes belong to 
 which fields. Somehow Firebird must be able to tell the difference between 
 the following two records:
 
 FirstName LastName
 'Jim'   'Starkey'
 'Jim Starkey'  ''
 
 
 It may well be that it was coded in the Interbase days and has not been 
 looked up until now. Or maybe it's the blob in the RDB$DESCRIPTOR field, in 
 which case I do not know how to decode it into text.
 

No.   The format of the record is included in the record header, which is 
described in ods.h.  Use that number to look up the record in RDB$FORMATS. That 
tells you how long each field is - uncompressed.  On disk, each record is 
compressed using a run length compression of the whole record.  The first byte 
is a length.  If it's positive, the next n bytes are data.  If negative, the 
following single byte value is repeated n times.  E.g.   0x3, 'Jim', 0x-20, ' ', 
0x7, 'Starkey', 0x23, ' ' etc. would be Jim(17 spaces)Starkey.  That would 
correspond to FirstName 23, LastName 30.
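
To tie that back to the original question - how the engine knows which bytes
belong to which field - here is a rough sketch.  The field descriptor below is
illustrative, not the real ods.h declaration: once the record is decompressed
to its full declared length, each field sits at a fixed offset with its
declared length, so slicing it apart is simple arithmetic.

#include <cstddef>
#include <string>
#include <vector>

// Illustrative stand-in for one field entry of an RDB$FORMATS descriptor.
struct FieldDesc { size_t offset; size_t length; };

// With FirstName declared as 20 characters and LastName as 30, the
// uncompressed buffer "Jim" + 17 spaces + "Starkey" + 23 spaces splits
// unambiguously; 'Jim Starkey' stored in FirstName would instead fill the
// start of the FirstName slice and leave LastName entirely blank.
std::string field_value(const std::vector<char>& record, const FieldDesc& f)
{
    std::string value(record.begin() + f.offset,
                      record.begin() + f.offset + f.length);
    while (!value.empty() && value.back() == ' ')   // drop the trailing padding
        value.pop_back();
    return value;
}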

Good luck,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Fields in data pages

2014-11-30 Thread Ann Harrison
sorry, late tired. first name 20, not 23

Cheers,


Ann


 On Nov 30, 2014, at 8:38 PM, Ann Harrison aharri...@nimbusdb.com wrote:
 
 
 
 On Nov 30, 2014, at 6:35 PM, Stuart Simon stuart...@gmail.com wrote:
 
 OK, but version numbers tell me nothing. What I am looking for is an 
 explanation of how Firebird (the package) can tell which bytes belong to 
 which fields. Somehow Firebird must be able to tell the difference between 
 the following two records:
 
 FirstName LastName
 'Jim'   'Starkey'
 'Jim Starkey'  ''
 
 
 It may well be that it was coded in the Interbase days and has not been 
 looked up until now. Or maybe it's the blob in the RDB$DESCRIPTOR field, 
 in which case I do not know how to decode it into text.
 
 No.   The format of the record is included in the record header, which is 
 described in ods.h.  Use that number to look up the record in RDB$FORMATS. 
 That tells you how long each field is - uncompressed.  On disk, each record 
 is compressed using a run lenth compression of the whole record.  The first 
 byte is a length.  If it's positive, the next n bytes are data.  If 
 negiative, the next n bytes are the following byte value.  E.g.   0x3, 
 'Jim', 0x-20, ' ', 0x7, 'Starkey', 0x23, ' ' etc. would be Jim(17 
 spaces)Starkey.  That would correspond to FirstName 23, LastName 30.
 
 Good luck,
 
 Ann

--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


[Firebird-devel] Thought about constraint declarations for V4

2014-10-27 Thread Ann Harrison
When we decided not to validate constraints on declaration, our reasoning
was that computations and database access were expensive and any decent
application programmer or DBA would always validate constraints before
declaring them and control access to the constrained items until the
constraint was successfully committed.

That was then.  Now, well, cycles are a lot easier to come by than good
developers.
I think it would be wise to add a [NO] VALIDATION modifier to constraint
definitions, including NOT NULL and referential integrity definitions.  In
the presence of a VALIDATION modifier, Firebird would begin enforcing the
constraint on commit (as now) and then start a pass to ensure that the data
complies with the constraint.  I'd also be tempted to add a database
configuration option that makes validation the default.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


[Firebird-devel] Indexes - things to think about for V

2014-10-27 Thread Ann Harrison
If, in Firebird 4, you're going to look at indexes, I have four suggestions.


1) Indexes with large keys

An index key that's a significant fraction of the page size leads to
inefficient indexes (or infinite depth), so rules about index size aren't
just the revenge of database management system developers on database
application designers.  There's a real problem with large keys.

Back sometime around Interbase 3 -  mid-1980's - we had an engineering
showdown about indexes.  Previously, the size of an index key was checked
at runtime, so a long string or concatenated index could cause a fatal
runtime error.  After much discussion, we decided to compute the largest
possible size for an index key at definition time and reject oversized
keys.  That decision was wrong.  Not that throwing an error at runtime was
right, but we didn't think about a possible third way.

Now UTF8 makes the problem much worse because the possible size of a key is
vastly larger than the probable size of the key.  Here's a possible third
way:  Store as much of the key as fits in a size appropriate for the given
page size.  Let prefix compression take care of the unfortunate case where
the first eighty or ninety characters are the same.  When inserting, if
there's a previous version that matches for the key length, read the
record to decide whether to insert before or after.

Yes, that's going to mean reading more records - but only when the option
is not indexing the field at all, or indexing a substring function of it -
which performs worse.  Most UTF8 characters (or whatever the correct name
for them is ... glyphs?)  are only two bytes, so mostly Firebird is
refusing to create indexes that would never be a problem.  The problem is
worse in compound indexes which are treated as if each field must be
included at its full length, rather than recognizing that many have
trailing spaces that are not included in the key.
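
A bare-bones sketch of that third way - the names and the tie-breaking policy
are assumptions on my part, not a worked-out design:

#include <algorithm>
#include <cstddef>
#include <string>

// Truncate the key to whatever size is appropriate for the page size instead
// of rejecting the index at definition time.
std::string make_key(const std::string& value, size_t max_key_bytes)
{
    return value.substr(0, std::min(value.size(), max_key_bytes));
}

// Only when an existing entry matches the new key for the full truncated
// length is it necessary to read the record and compare complete values to
// decide whether the new entry goes before or after it.
bool need_record_lookup(const std::string& new_key, const std::string& existing_key)
{
    return new_key == existing_key;
}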


2) Compound keys

The advantage of Firebird's algorithm is that individual values can be
suffix compressed, meaning that trailing spaces and zeros (in double
precision) are removed.  There's at least one algorithm for separating the
columns in compound keys that's more efficient than the one Firebird uses.
  The one I know about puts a binary zero byte between each column.  When
the column contains a byte of zero, replace it with binary bytes 0,1.  When
the column contains binary bytes 0,1, replace it with 0,1,1.  That's enough
to stop all confusion.  There are probably better algorithms.
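
A minimal sketch of that separator-escaping scheme as just described
(illustrative only, not Firebird's current key encoding):

#include <cstddef>
#include <string>
#include <vector>

// Join column keys with a zero byte; escape any zero inside a column so the
// separator can never be confused with data, and bytewise comparison still
// sorts column by column.
std::vector<unsigned char> make_compound_key(const std::vector<std::string>& columns)
{
    std::vector<unsigned char> key;
    for (size_t c = 0; c < columns.size(); ++c)
    {
        if (c)
            key.push_back(0);                  // column separator
        for (unsigned char b : columns[c])
        {
            key.push_back(b);
            if (b == 0)
                key.push_back(1);              // 0 becomes 0,1; 0,1 becomes 0,1,1; and so on
        }
    }
    return key;
}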

3) Transaction IDs in index keys

 Someone mentioned including transaction ids in the index key to avoid
reading record data when using an index that provides all needed
information.  Those situations are real - count of customers in Mexico,
junction records for M to M relationships, etc.  In some cases, two
transaction ids are required - one for the transaction that created the entry 
and one for the transaction that superseded it.  That's potentially 16 bytes subtracted 
from the key size.  OK, maybe not so big a problem, but it also means that
when a key changes, the change must be made in two places.  But you know
all those arguments.

Jim's going to hate this.  Might it be possible to have a second sort of
index specifically for those cases where the read efficiency outweighs the
storage and management overhead?  Yes, one more place where the application
designer can be blamed for poor performance.


 4) Numeric indexes.

In my opinion, the decision to use 64 bit integers in indexes on numerics
larger than 9 digits and quads was boneheaded.  The advantage of double
precision keys is that the scale of an integer or numeric/decimal field can
be changed without invalidating the index.  That's a significant advantage.

Interbase supported quads on VAXen from the beginning using the same sort
of approximate logic I described above for the rare case when a key
actually turns out to be its full declared size and can't be stored.  Fine,
if you've got 18 digits of precision, use double precision and check the
record data for the details beyond digit 16.

But that's not the best you can do.  When creating a numeric key which
represents a column having more than 16 digits, tack the remaining digits
onto the end of the key, following the double precision number.  Yes, that
will make reconstructing the value from the key slightly more complicated,
but it will sort correctly bytewise.
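
A sketch of that key construction: the mapping of a double onto a bytewise
comparable integer is a standard IEEE-754 trick, but rounding, the sign of the
tail, and NaN handling are deliberately ignored, so treat this as the shape of
the idea rather than a usable encoder.

#include <cstdint>
#include <cstring>
#include <string>

// Map a double onto a uint64 whose unsigned ordering matches numeric ordering.
uint64_t sortable_bits(double d)
{
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    return (bits & 0x8000000000000000ULL) ? ~bits
                                          : bits | 0x8000000000000000ULL;
}

// Key for an 18-digit numeric: the double approximation first, then the
// decimal digits the double could not represent, so ties on the double part
// are broken by the tail.
std::string numeric_key(double approx, const std::string& extra_digits)
{
    const uint64_t bits = sortable_bits(approx);
    std::string key;
    for (int shift = 56; shift >= 0; shift -= 8)   // big-endian so memcmp sorts correctly
        key.push_back(static_cast<char>((bits >> shift) & 0xFF));
    return key + extra_digits;
}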


Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Hiding source code of procedures and triggers will not work in FB 3

2014-08-29 Thread Ann Harrison
Anyone who's followed the support list for a decade or so knows that developers 
frequently ask how they can protect the source of their procedures.  And, 
likewise, that the answer is that the essence of the procedure is the BLR and 
must be readable to be used, so the best option is to mask the problem by 
setting the source to NULL.  Beyond that, Carlos Cantu has unusual insight in 
that he offers a wealth of support and information for Firebird developers in 
Brazil, one of the countries that uses Firebird the most.  

Nothing in Firebird uses the content of RDB$SOURCE fields except the code in 
ISQL that extracts schemas.  

Setting the source to null is not secure, but it is a technique that has been 
used widely for the whole history of Firebird.  It's similar to the Java class 
obfuscators in that reverse compiling BLR isn't impossible, but it deters the 
lazy.  

That said, blocking user writes to the system tables is a good thing.  
Writeable system tables were a cute idea in the early eighties, using the 
database methods to run the database.  In the wider world (and the world is 
much wider now) writeable system tables are a disaster waiting to happen.  
However, there's a cost to change, even change for the better.  When possible, 
change should use techniques that preserve current capabilities.  

How hard would it be to add clauses  [WITH [OUT] SOURCE] to CREATE and ALTER 
statements plus [DROP SOURCE] to the ALTER statements?  (Including, of course, 
RECREATE and all its variants).   If that would hold up V3, then promise it 
for 3.01 and let developers who worry about theft wait one release.

Changing Firebird to a direct SQL engine won't be materially affected.  As Mark 
Rotteveel noted, complex system objects are likely to have two representations 
- SQL source and partially compiled - for efficiency and to allow developers to 
hide their work.

Cheers,

Ann

Don't get me started on encrypting the system tables, or I'll trot out the old 
politically incorrect story.
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] New Interface

2014-08-12 Thread Ann Harrison

 On Aug 12, 2014, at 1:11 PM, Jim Starkey j...@jimstarkey.net wrote:
 
 
 My position is that the external interface (the API) should remain y-valve 
 and handle oriented, extended as needed.  An interface for export engine 
 semantics, however, has different requirements and can and should be 
 encapsulated as a objections.

An objection-oriented interface certainly fits the tenor of this discussion. 

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Error messages how-to

2014-08-12 Thread Ann Harrison

 On Aug 12, 2014, at 1:17 PM, Jim Starkey j...@jimstarkey.net wrote:
 
 Sigh.  There used to be database based system to create and edit messages and 
 generate header and message files.  Very handy using a database to develop a 
 database system.  

Please lets not get into the moral and political issues of recursive 
development tools!  That discussion makes the API questions look reasonable and 
productive. 

Yours for peace and harmony,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] dtype_packed

2014-08-09 Thread Ann Harrison

 On Aug 5, 2014, at 5:50 AM, Dmitry Yemanov firebi...@yandex.ru wrote:
 
 05.08.2014 12:19, Mark Rotteveel wrote:
 
 it seems that dtype_packed is also known in
 COBOL (and SAP) for a BCD (binary coded decimal).
 
 dtype_packed really seems to be a packed decimal, however it's not used 
 by Firebird since day one.

The original API used by Interbase was DEC's compromise interface for the two 
relational databases that were developed simultaneously. The interface 
definition included not only entry points, BLR, and arguments but also major 
error codes and data types. At the time (1982) some programmers were convinced 
that you couldn't perform decimal integer arithmetic in binary. So packed 
decimal made the cut. 

But never got implemented in Interbase. 

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Meaning of incarnation

2014-07-01 Thread Ann Harrison
 
Mark,

 In various places of the Firebird wire protocol and the Firebird sources 
 the term 'incarnation' is used. What does this mean?

For cross-version compatibility, most objects have a version number, sometimes 
called incarnation.  Objects that have not changed will be zero.   At least 
that's what it meant originally.  The header file that declares the object 
should have all versions.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Cursor stability in READ COMMITTED transactions

2014-06-27 Thread Ann Harrison


 On Jun 23, 2014, at 3:57 PM, Nikolay Samofatov 
 nikolay.samofa...@red-soft.biz wrote:
 
 Some records for a join can be read before a transaction is committed, and 
 some after. Same with EXISTS.  It can see different set of commits from the 
 one when main row was read.
 You can see partial commits in results, even inside a single row returned by 
 the query.
 Nobody is ready for this, this is CRAZY, nobody expects this. If this data is 
 used for any remotely important purpose, you will get whammed.

Right, that's the beauty of read-committed.  It's not stable.   And,  in 
Firebird, it performs no better than an isolation mode that is stable.  So why 
offer it at all?  In both Interbase and NuoDB, major customers to be insisted 
that in every database they used, they could see new data without starting a 
new transaction.  And that was the way databases behaved.  No amount of 
reference to scholarly literature could convince them in the face their 
experience.  So we put in a bad mode to close big sales.

 The fact that inconsistency shows up infrequently, under parallel load and is 
 not easily 
 reproducible, sets you up to be eventually burned.  And once burned you will 
 hesitate to trust DMBS that behaves in such ways.

And that's the way Oracle, MS SQL and others behaved at the time. 
 
 This is not normal. This is a BUG.
 
The bug is the existence of read committed.  The excuse is compatibility with 
other databases.  If they now offer cursor stability, then Firebird should too.  
If there's a performance cost, make it an option.  May I suggest using the word 
stability rather than sensitivity?  At least to my ear (and feeble brain) 
sensitivity doesn't suggest seeing changes en route.  Stability suggests not 
seeing concurrent changes.


Cheers,

Ann
 

--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] DDL Triggers, how to retrieve type?

2014-05-23 Thread Ann Harrison
On Fri, May 23, 2014 at 3:03 AM, Paul Beach pbe...@ibphoenix.com wrote:


  - Any other suggestion?
 
  Drop dialect 1 support.
 
Allow dialect 1 to have access to BIGINT fields.


For what little it's worth,
  a) Dialect 1 did include 64bit integers at one time.  VAX's had a native
64 bit integer called a quad.
  b) When your database company forces you to rewrite your application
(losing arithmetic precision in the process) for the convenience of the
company's developers, it's time to consider the cost of moving to a
different database.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] A patch to submit

2014-05-20 Thread Ann Harrison
On Mon, May 19, 2014 at 12:29 PM, Dimitry Sibiryakov s...@ibphoenix.comwrote:

 :
  Can you explain erase in place briefly?

I simply call update_in_place() with delete stub from VIO_erase() if
 the head record
 version is marked with the same transaction number. I.e. the same logic
 used as in
 VIO_modify(). It is undone as usual with VIO_backout().


To restate, if one transaction first creates a version of a record then
deletes it, you handle
it the same way you would a transaction that created a record version then
updated it - without
declaring any savepoints.

If a transaction deletes a record in which the latest version was created
by a different transaction,
you create a separate deleted stub, just as before.

Right?

Cheers,

Ann


 --
WBR, SD.



--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] A patch to submit

2014-05-19 Thread Ann Harrison
On Mon, May 19, 2014 at 11:47 AM, Dimitry Sibiryakov s...@ibphoenix.comwrote:


 6) Implement erase-in-place which leads to significant code simplification.


Can you explain erase in place briefly?  In specific, how is it undone in
a catastrophic failure (i.e. not a transaction cleanup)?

Thanks,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Feature request discussion (Reply to Planning the post v3 development)

2014-05-10 Thread Ann Harrison
On Sat, May 10, 2014 at 3:03 AM, Molnár Attila amol...@mve.hu wrote:




 *Optimization II. *- temporal indexing of materialization : e.g. when
 ORDER/GROUP BY has no index then currently the whole resultset is
 materialized, and the sorting moves the whole row each time. Instead of
 this it should create a temporal index on the order/group columns then
 fetching on the temporal index. In this way much less writes needed. This
 shold be applied after a treshold : common sense sais after index size/row
 size rate is smaller than 0.5.


Just to reemphasize what Vlad said earlier:

1) When data is retrieved for a sorted query, Firebird retrieves only the
sort key and any columns that are referenced, not the whole record.

2) For large record sets, Firebird uses a two-level sort.  Initially, it
uses an in-memory quick sort, moving pointers, not records.  When the
sorted data reaches the limit of the in-memory sort buffer, the data is
written to a sort temp file in the desired key order.  When all the data to
be sorted has been read, Firebird merges the data in the sort temp files.

So, no, sorting does not move the whole row each time.  Not the whole row,
not each time.
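
A toy illustration of that two-level scheme - the buffer handling, the sort
temp files, and the pointer-based quicksort are all reduced to standard
library calls here, so this shows only the shape of the algorithm, not the
engine's sort:

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <string>
#include <vector>

// Each sort item carries only the key and the referenced columns,
// never the whole stored record.
struct SortItem { std::string key; std::string referenced_columns; };

std::vector<SortItem> external_sort(const std::vector<SortItem>& input, size_t buffer_items)
{
    if (buffer_items == 0)
        buffer_items = 1;
    const auto by_key = [](const SortItem& a, const SortItem& b) { return a.key < b.key; };

    // Phase 1: sort each bufferful in memory and write it out as a sorted run
    // (the vectors below stand in for the sort temp files).
    std::vector<std::vector<SortItem>> runs;
    for (size_t i = 0; i < input.size(); i += buffer_items)
    {
        std::vector<SortItem> run(input.begin() + i,
                                  input.begin() + std::min(i + buffer_items, input.size()));
        std::sort(run.begin(), run.end(), by_key);
        runs.push_back(std::move(run));
    }

    // Phase 2: merge the sorted runs in key order.
    std::vector<SortItem> merged;
    for (const auto& run : runs)
    {
        std::vector<SortItem> next;
        std::merge(merged.begin(), merged.end(), run.begin(), run.end(),
                   std::back_inserter(next), by_key);
        merged = std::move(next);
    }
    return merged;
}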

Good luck,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] The invisible generator and the rdb$ prefix

2014-04-04 Thread Ann Harrison
On Thu, Apr 3, 2014 at 2:06 PM, Claudio Valderrama C. cva...@usa.netwrote:




 AFAIK, you can put triggers on sys tables and they last until the last
 attachment finished. When the db is loaded again, those triggers do not
 load. I don't know if that changed recently, but was this way since I
 remember.


My recollection is that some user modifications to system tables (adding
triggers
or columns) are OK, but disappear after a backup and restore with gbak.
 Gbak
doesn't look for user modifications of system tables and recreates them
from its
understanding of their state in the ODS version being created.

Changing gbak so it backs up user enhancements to the system tables is just
programming, and restoring them after the base table are created isn't
hard, but
both go against the V3 goal of shutting users out of system space.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] MySQL versus Firebird

2014-03-11 Thread Ann Harrison
On Tue, Mar 11, 2014 at 9:47 AM, marius adrian popa map...@gmail.comwrote:

 I posted here so maybe it can help us to update the sql conformance
 page  http://www.firebirdsql.org/en/sql-conformance/

 ps: he is not quite a random guy on the internet but a quite biased at the
 end
 Software Architect at MySQL/Sun/Oracle from 2003-2011, and at HP for a
 little while after that.
 http://www.linkedin.com/pub/peter-gulutzan/b/28/761


For what it's worth, I worked with Peter Gulutzan on issues relating to the
SQL
standard, especially foreign key constraints.  He knows more about obscure
corners of reflexive constraints than anyone else I know.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Firebird db integration big-endian LibreOffice unable to open little-endian embedded firebird db

2014-01-13 Thread Ann Harrison
On Mon, Jan 13, 2014 at 3:52 PM, Andrzej Hunt andr...@ahunt.org wrote:


 Unfortunately we have no choice but to use the backup format as we
 have to have to be endian-agnostic.


It's possible to modify Firebird to be endian agnostic - I did it for a
customer
some years ago.  Works fine for metadata and structured data, but there's no
way to handle the contents of blobs. Firebird really doesn't know what's in
a blob,
so changing endianness would be a disaster.

Historically, Firebird was created aware of endian issues because it ran on
Vaxes and Intel machines.  At the time, the performance impact of converting
binary data on every reference was prohibitive.  Then for a long time,
different
endian machines ran in different shops - not many mixed Mac, Vax, and
Intel servers, so having databases of different natures was not a great
problem.
Then Apple switched endianness and the problem became slightly interesting
again.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Some aspects of the optimizer hints

2014-01-05 Thread Ann Harrison
Alex,


  Furthermore, despite the everyone's instinct, it's a good deal faster in
  the general case to read a table in an optimal order and sort the data
  in memory that to read the data in index order - random order relative
  to storage.
 

 Ann, from server POV you are definitely right. But when talking about
 ways to get first record from server as fast as possible we care about
 not server-only performance, but about performance of overall
 client-server system.



When the server is bogged down running queries inefficiently, the overall
system performance does suffer.  However, if all you want is the first few
records, reading hundreds of thousands and sorting them is wasteful - and
hard on the client.  That's why Firebird's optimizer handles a restriction
on
the number of rows to be returned as an optimizer hint to navigate through
the index, rather than doing a natural read and sorting the results.




 We have very simple table:

 create table MOVIES
(NAME varchar(255) primary key,
 COMMENTS varchar(8192),
 ISO_IMAGE blob);


(A good example of the problems with natural primary keys.)

And want to run such a query:

 select * from MOVIES where COMMENTS like '%yacht%' order by NAME;

 Two plans are possible:
 PLAN SORT (MOVIES NATURAL)
 PLAN (MOVIES ORDER RDB$PRIMARY1)


Right.  And if you want to give the optimizer a hint that it should choose
the second plan, change the query like this:

select first 100 * from MOVIES where COMMENTS like '%yacht%' order by
NAME;

If you think your client may want more than a million rows, increase that
number.
As far as I know, Firebird's optimizer doesn't do anything clever like
trying to guess
what part of the table the first n represents - ask it for the first 1
or the first
ten million, it will still choose the navigational path.  So ask it for ten
times as many
records as you expect to find.  Probably a bad idea to exceed the size of
an int64.


 Cause we need to scan all the table first plan appears to be better - it
 requires less disk operations even taking into an account sort after
 scanning all the table. But overall throughput depends upon what are we
 going to do with ISO_IMAGE at client side. Imagine we want to burn DVDs
 with all images (and have the set of DVDs sorted after burning).


Overall throughput also depends on what else is going on with the system -
if you've got lots of read queries that are hitting the disk hard, making
your
burn program wait a few seconds for its first results could improve the
performance of the system overall.  But that's moot.  You have the ability
to ask for navigational access through the index, if that's what you think
you need.



 Certainly this does not mean that for all queries, containing blobs,
 natural scan should not be used :-)


And again, for the naive, the blobs aren't accessed until the records are
read, qualified by the comment containing 'yacht' - in this case, the non-
standard CONTAINING would be better because it is not case sensitive.


 Therefore I agree with Dmitry - such a hint to
 optimizer is required part of SELECT statement if we want to have
 optimal performance for all the system, not for server only.


And I guess I agree also, but think that the hint is already part of
Firebird - as either FIRST or LIMIT or ROWS or whatever other silly
syntax exists for restricting the number of records returned.  Remember,
the hint is needed only when the query is sorted and there is an index
with the same keys in the same order and direction as the sort and
that index can be used.


 (Telling
 true I do not understand why _THIS_ kind of hint is not part of SQL
 standard - may be people who deal with standard look at the world from
 server-only POV?)


No, the standards organization attempts to restrict SQL to the logical
description of database structure and manipulation.  That's why CREATE
INDEX is not a standard statement.

Cheers,

Ann



--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] FWD: Re: Compatibility FB3.0 boolean and Interbase

2013-11-14 Thread Ann Harrison
On Wed, Nov 13, 2013 at 4:01 AM, Dimitry Sibiryakov s...@ibphoenix.comwrote:

 13.11.2013 9:17, Alex wrote:
  For me main problem is that we do not know format of interbase messages
  when boolean is used in them. We do not know alignment rules. We know
  _nothing_  about internals of boolean implementation in interbase.

Fortunately, we don't need to know all this to make boolean compatible
 from API POV.
 Client library will fill sqllen automatically, and even if rules differ,
 right application
 will handle this difference well.


Frankly, I don't see the problem with defining 590 to be boolean in
addition to the
Firebird specific boolean identifier.  You wouldn't be guaranteeing
compatibility with
InterBase - that was abandoned in 2000.  But at the same time, you wouldn't
be
blocking it arbitrarily.  Those who want to walk on the wild side and mix
tools from
databases developed by different, non-communicating organizations should
expect
problems.  When those problems occur, neither database is at fault.  But
why put
in arbitrary roadblocks?

Maybe as a compromise Firebird could agree not to use 590 for anything but
boolean -
not saying that it be defined as boolean, but put a comment in the header
file saying that
590 is reserved for non-use.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Record Encoding, was Unicode UTF-16 etc

2013-09-03 Thread Ann Harrison
On Tue, Sep 3, 2013 at 5:04 AM, Dimitry Sibiryakov s...@ibphoenix.com wrote:


What problem do you foresee?
AFAIK, ccess to single field values is already incapsulated in record
 class, so string
 buffer in DSC can be replaced with pointer without hacking whole engine.
 So, only SQZ
 module should be changed to feed data in a little more clever way.


In fact, the testing done with Netfrastructure (aka Falcon) and NuoDB
showed that just
storing values with the encoding Jim described reduced the overall record
size by 30%
compared with the run length encoding done by SQZ.  The major gain with SQZ
is eliminating
trailing blanks and compressing a series of numeric columns with the value
zero.

So, it might be possible to make SQZ a lot dumber.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Unicode UTF-16 etc

2013-08-31 Thread Ann Harrison


On Aug 31, 2013, at 4:55 AM, Mark Rotteveel m...@lawinegevaar.nl wrote:

 On 29-8-2013 17:41, Jim Starkey wrote:
 Paradoxically, Japanese strings tend to be shorter in UTF-8 than 16 bit
 Unicode.  The reason is simple: There are enough single byte characters
 -- punctuation, control characters, and digits -- stay as single bytes,
 double byte characters are a wash, and the single byte characters
 generally balance the number of three byte characters.
 
 UTF-16 is a mess with nasty problems of endians, multi-word characters,
 and illegal codepoints to worry about.
 
 
 Unfortunately the implementation of UTF-8 in Firebird is annoying 
 because it reduces that maximum allowed number of characters to a 1/4 of 
 that for single byte character sets making it necessary to switch to 
 blobs sooner.

A better solution is to change the implementation of CHAR and VARCHAR to accept 
longer strings.   

Cheers,

Ann




--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-4201) computed field has null value inside BI trigger

2013-08-31 Thread Ann Harrison
I think this is correct - if inexplicable - behavior according to the Standard. 
 Something about the state of the column prior to the operation. 

On Aug 31, 2013, at 8:13 AM, Gorynich (JIRA) trac...@firebirdsql.org wrote:

 computed field has null value inside BI trigger
 ---
 
 Key: CORE-4201
 URL: http://tracker.firebirdsql.org/browse/CORE-4201
 Project: Firebird Core
  Issue Type: Bug
  Components: Engine
Affects Versions: 3.0 Alpha 1, 2.5.3
Reporter: Gorynich
 
 
 CREATE TABLE NEW_TABLE (
NEW_FIELD1  INTEGER NOT NULL,
COMP_FIELD COMPUTED BY (NEW_FIELD1+1),
NEW_FIELD2  INTEGER
 );
 
 SET TERM ^ ;
 
 CREATE TRIGGER NEW_TABLE_BI0 FOR NEW_TABLE
 ACTIVE BEFORE INSERT POSITION 0
 as
 begin
  new.New_Field2 = new.Comp_Field;
 end
 ^
 SET TERM ; ^
 
 
 INSERT INTO NEW_TABLE (NEW_FIELD1)
 VALUES (1);
 
 NEW_FIELD1 - 1
 COMP_FIELD - 2
 NEW_FIELD2 - null ???  why ?
 
 Firebird-2.5.2.26540
 
 NEW_FIELD1 - 1
 COMP_FIELD - 2
 NEW_FIELD2 - 2   OK :)
 
 
 -- 
 This message is automatically generated by JIRA.
 -
 If you think it was sent incorrectly contact one of the administrators: 
 http://tracker.firebirdsql.org/secure/Administrators.jspa
 -
 For more information on JIRA, see: http://www.atlassian.com/software/jira
 
 
 

--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [FB-Tracker] Created: (CORE-4190) Wring value of data pages average fill percent in GSTAT in case of storing varchars that much longer than page size

2013-08-22 Thread Ann Harrison
On Aug 22, 2013, at 7:37 AM, Pavel Zotov (JIRA) trac...@firebirdsql.org 
wrote:

 Wrong value of data pages average fill percent in GSTAT in case of storing 
 varchars that much longer than page size
 ---
 
 ...
Page size4096
ODS version12.0
 ...
 T (128)
Primary pointer page: 198, Index root page: 199
Total formats: 1, used formats: 1
Average record length: 33037.00, total records: 1
Average version length: 0.00, total versions: 0, max versions: 0
Average fragment length: 4046.00, total fragments: 8, max fragments: 8
Average unpacked length: 32771.00, compression ratio: 0.99
Pointer pages: 1, data page slots: 1
Data pages: 1, average fill: 17%
Primary pages: 1, full pages: 0, swept pages: 0
Big record pages: 8
 ...

Sorry to respond to the list - I'm on my iPhone and don't remember my 
SourceForge password. 

This is not a bug.  Overflow (Big Record) pages are not data pages and are 
always filled 100%.  The one data page contains the record header plus whatever 
data was left over when the last overflow page was filled. Big data is written 
back to front so the left over bit is the beginning of the string. 


 
 
 The value: average fill: 17% - is wrong: long string was splitted on 8 DP 
 and occupies in each of them almost 100% of place because of  poor 
 compressing ratio (it was formed via gen_uuid()  so RLE algorithm can not 
 compress such data).

--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: human-readable DBKEY

2013-04-06 Thread Ann Harrison
Dmitry,

 True, db_keys from aggregate views are problematic, but not for simple
  joined views.

 Correct and it hasn't changed. I just meant that the view's DBKEY is not
 something separate, it simply a concatenation of the individual tables
 DBKEYs. You cannot select from a joined view using its combined DBKEY,
 you have to decode it into sub-parts and then use a particular table's
 DBKEY for retrieval. As long as we speak about identifying and locating
 *records*, IMHO it makes a lot of sense to forget about views and work
 with tables only.


Your choice.  I vaguely recollect having used view db_key values in
debugging
something, but that sort of bug is no doubt gone, and besides, the values
are
still there.

The advantage of a function over conversion to int64 is that it can handle
changes
should the db_key ever need to change.  It could also format the DB_KEY into
its
components - pointer page number, offset on pointer page, offset in data
page index,
giving the whole thing some meaning.
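
If such a function were added, a first cut might look like the sketch below.
Every number and the packing arithmetic are placeholders - the real
decomposition depends on page size and on the actual DB_KEY layout - the point
is only that a function can present the components without freezing any
particular external representation:

#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical decomposition of the record-number part of a DB_KEY into
// pointer page sequence, slot on the pointer page, and index on the data page.
std::string format_dbkey(uint64_t record_number,
                         uint64_t slots_per_pointer_page,
                         uint64_t records_per_data_page)
{
    const uint64_t line    = record_number % records_per_data_page;
    const uint64_t rest    = record_number / records_per_data_page;
    const uint64_t pp_slot = rest % slots_per_pointer_page;
    const uint64_t pp_seq  = rest / slots_per_pointer_page;

    char buf[64];
    std::snprintf(buf, sizeof buf, "%llu:%llu:%llu",
                  static_cast<unsigned long long>(pp_seq),
                  static_cast<unsigned long long>(pp_slot),
                  static_cast<unsigned long long>(line));
    return buf;
}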

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: human-readable DBKEY

2013-04-05 Thread Ann Harrison
  Unfortunately, DBKEY has variable size and not always fit into int64.

 It has a fixed size and its recno part (leaving the relation id aside)
 always fits into int64.

 It's currently fixed at 8 bytes for simple tables and 8 bytes * number of
streams
for views.  I like the function approach.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: non-expandable fields

2013-04-05 Thread Ann Harrison
On Fri, Apr 5, 2013 at 11:34 AM, Doug Chamberlin
chamberlin.d...@gmail.comwrote:


 I would implement it so that if a user does not have SELECT permission on
 a field that any mention of that field in a SELECT statement is an outright
 error for that user. Just as if the field did not exist.

 I think that's the intention of the standard, but like Dmitry, I have been
unable to find a clear statement to that effect.  If it matters to anybody,
 I've got a couple of friends who are serious standards addicts and I could
ask for a reference there.  The Red Database approach seems a bit dicey -
having a program return different results depending on who runs it...
 especially if the program expects a specific shape for a table.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] RFC: non-expandable fields

2013-04-04 Thread Ann Harrison
On Thu, Apr 4, 2013 at 3:29 PM, Sergey Mereutsa s...@dqteam.com wrote:

 Hello Vlad and all,

 IMHO, the easiest way to implement this is to make all fields with
 prefix RDB$ (or whatever) hidden by default. Untill you do not
 address to those fields directly - they are ignored by the engine,
 when data is fetched.


There's a perfectly good flag field in RDB$RELATION_FIELDS.  No
reason to make dependencies on the field name.  I'm not going to
opine on the value of the feature, just don't make it depend on a
naming artifact.

Cheers.

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] API data types

2013-03-20 Thread Ann Harrison
Dmitry,


 1) If we ever face a platform with sizeof(int) == 8 (IA64? anything
 else?) would it be OK that our API becomes platform-specific in regard
 to its ABI? Or was it exactly the goal? IIRC, we had discussed using
 types from stdint.h instead, but I don't remember any decision.


When there were actually platform-specific types, the goal was to make the
API platform independent. That was a significant technical advantage when
the world was moving from mostly little endian to big endian, DEC had its
own floating point type, and the Cray (yes there was a Cray port) had
sixty-four bit integers, sixty-four bit bytes, and 64-bit characters.

If it's not too technically difficult, maintaining a platform-independent
API may help Firebird succeed as new processor families mature to the point
of being useful for database applications.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [firebird-support] Re: Sweep process too many writes to disc

2013-01-09 Thread Ann Harrison
Sean,


 To be clear, you are saying that if a row as an index on Field A which has
 4 record versions (3 which can be dropped), and the value of the index for
 those versions where 1, 2, 3 and 4.  When the sweep encounters the row,
 when it scrubs version 1, if reads the index and tries to find the
 version 1 entry and clean it at the same time.  And so on. Correct?


It's more complicated than that. The relationship between record versions
and index entries is not 1:1. If the index key doesn't change when a record
is updated, there's no change to the index.  That's a huge win for stable
indexes like primary keys.  Furthermore, if a key value starts as A for
example, and is modified to B, then modified back to A, then back to
B again (4 record versions), the record will have two index entries, one
for A and one for B.  So the garbage collector builds a list of
staying values and going values.  In the case of the A's and B's, if
only the two oldest record versions were garbage collected, then there
would be no change to the index.  If all three old versions went, then the
A entry would be removed.

What that means is that the garbage collector has to access all versions of
a record to decide which index entries can be removed, so there's no way to
clean the records first, then go back to get the index entries.  That's
particularly true since someone could have modified the record again,
during the sweep, and changed the key value back to A.  There's no way to
know that the A is new without looking at the record version chain.
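
A rough sketch of that bookkeeping, with containers and names of my own
choosing rather than the engine's: every version of the record is examined,
each key value is classified as staying or going, and an index entry is
removable only if its value appears in no surviving version.

#include <set>
#include <string>
#include <vector>

// For the A, B, A, B example above: if only the two oldest versions go, both
// A and B are still staying and nothing is removed from the index; if all
// three old versions go, only the A entry becomes removable.
std::set<std::string> removable_index_entries(const std::vector<std::string>& staying_keys,
                                              const std::vector<std::string>& going_keys)
{
    const std::set<std::string> staying(staying_keys.begin(), staying_keys.end());
    std::set<std::string> removable;
    for (const std::string& key : going_keys)
        if (!staying.count(key))
            removable.insert(key);
    return removable;
}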

And to whoever speculated that sweep would block all other access, no,
that's no more true than any other application that scans the database.
 Sweep is just a normal low-level API application that relies on normal
Firebird record and page handling, including (especially) cooperative
garbage collection.  Everything will be slower given that there's a process
reading the whole database, but there should be no long blockages.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [firebird-support] Re: Sweep process too many writes to disc

2013-01-05 Thread Ann Harrison
On Sat, Jan 5, 2013 at 5:35 AM, Dimitry Sibiryakov s...@ibphoenix.com wrote:


A dumb question: when touched pages are flushed to disk?

 a) after each single version removal
 b) after all versions of a record removal
 c) after all versions of all records on a data page removal
 d) after complete garbage collection for all records in list.


There's nothing magic about sweep.  Unlike gstat and gfix, it uses the C
API (not DSQL) to access the database.  Its I/O is just like a normal
application, meaning that pages are written depending on the state of the
cache.  Sweep, like any application, has no control over writes except that
its changes will go to disk when it commits.

Cheers,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] [firebird-support] Re: Sweep process too many writes to disc

2013-01-05 Thread Ann Harrison
On Sat, Jan 5, 2013 at 6:12 AM, Dimitry Sibiryakov s...@ibphoenix.com wrote:

 05.01.2013 11:59, Vlad Khorsun wrote:
  In general, pages are written as result of
  a) page cache preemption (when new page should be read into cache
 and least recently used page is dirty)
  b) precedence writes (when some dirty page is about to be written
 it forced writes of all pages it dependent on)
  c) another attachment in CS\SC asks our attachment to release the
 page lock we own and page is dirty in our local cache
  d) flush at commit\rollback\end_of_sweep\detach

So, if I understand correctly, if there is a lot of garbage in
 database, sweep with GC
 touch a lot of pages and almost any activity in parallel connections cause
 massive page
 flush. Exactly the behavior observed by TS.


Err, no on three counts.  The first is that there was no other activity on
the database that demonstrated 1Tb of writes for a 55Gb database.  The
second is that even if there were conflicts, that sweep was done by a
SuperServer configuration, so dirty pages do not go to disk when their lock
is released.  And finally, even if there were conflicts and it were a
Classic installation, the only time large numbers of pages are forced at
once is on commit.  Conflicts are written page by page with the conflicting
process waiting for the write to complete before proceeding.

Best regards,

Ann
--
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel


Re: [Firebird-devel] Sweep process too many writes to disc

2013-01-03 Thread Ann Harrison
On Thu, Jan 3, 2013 at 4:22 PM, Karol Bieniaszewski wrote:


 I have a problem understanding the internal work of sweep.
 My db size is 52.74GB. I detach from the db, restart the fb server and run sweep with
 gfix -sweep.  Now I see in task manager that Firebird 2.5.3.26584 writes to
 disk 1154 GB -
 this is 21 times bigger than the db size itself!

 In my point of view sweep should do:


Err, the code is really definitive, regardless of what you think it should
do


 1. scan every db pages and take info about record versions from
 transactions
 between Oldest Transaction and Oldest Active Transaction


Why scan?  Why not read and clean up at the same time?  Unless things have
changed, or my memory is failing (both possible), what sweep (like every
other transaction in cooperative garbage collection mode) does is read the
header page to determine the oldest transaction that was active when the
currently oldest transaction started.  Any record that has a version of that
age, and versions older than that, can be cleaned up.  Then the sweep starts
reading every record in every table, starting with the first table in
RDB$RELATIONS and reading records in storage order.  The read forces
garbage collection.

So, this is not a mark and sweep sort of sweep, but a single sweep.
 However, back versions that are stored on different pages may complicate
the sweep.
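
To make the one-pass idea concrete, here is a minimal sketch in plain C++ of
what the sweeper effectively does.  The Version, Record and Table types are
invented for the illustration, not Firebird's internal structures, and the
visibility rule is deliberately simplified:

#include <cstdint>
#include <iostream>
#include <vector>

// Toy model only -- not Firebird's internal structures.
struct Version { std::uint32_t txn; };                   // one record version
struct Record  { std::vector<Version> versions; };       // newest version first
struct Table   { std::vector<Record> records; };         // records in storage order

// Drop back versions that no snapshot can still need: once a newer version is
// itself older than the oldest active transaction, everything behind it is garbage.
static void collect(Record& rec, std::uint32_t oldestActive)
{
    for (std::size_t i = 1; i < rec.versions.size(); ++i)
        if (rec.versions[i - 1].txn < oldestActive)
        {
            rec.versions.resize(i);
            break;
        }
}

// The sweep itself: one pass, table by table, record by record, cleaning as it reads.
static void sweep(std::vector<Table>& tables, std::uint32_t oldestActive)
{
    for (Table& table : tables)            // tables in RDB$RELATIONS order
        for (Record& rec : table.records)  // records in storage order
            collect(rec, oldestActive);
}

int main()
{
    std::vector<Table> db(1);
    db[0].records.push_back({ { {5}, {3}, {2} } });   // three versions of one record
    sweep(db, 4);                                     // oldest active transaction = 4
    std::cout << db[0].records[0].versions.size() << " versions left\n";  // 2
}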


 2. after the first step, sweep should process old record versions by garbage
 collector work and then


See above.


 3. progress Oldest Transaction to a value equal to the Oldest Active Transaction
 from
 the time when sweep started.


Right, or at least close.  If there happens to be an old transaction stuck
in the first phase of a two phase commit, that's going to be the new Oldest.


 Garbage collector should work in page lock mode and when all info from the page
 has been processed then, if some old record versions were removed, write the whole
 page to hard drive.


The sweep process holds a write lock on each data page while it is making
changes to it.  If other transactions request a lock, the sweeper will
release the page once it is internally consistent.  That's the case for all



 Can you tell me what I omitted?


Ah, one possibility is that you have fragmented records or records with
back versions on different pages.  The way all garbage collection (sweep,
cooperative, or special thread) works is that most recent version of a
record is stored in a known location.  Back versions are located by
following a pointer from the newest to the next older and from that to the
next older, etc.   Fragmented record versions work much the same way.  If
you have not left free space on pages, or if you've done a lot of updates
to a single record, the newest version may not be on the same page with the
older versions.  Worse, the old versions may be on different pages.

So, although the sweep is reading the first record from your table, its
back versions may all be on different pages.

Does sweep work this way, or differently? If so, I should see many reads and
 only a few writes to disk, but this is not what happened. I see 1TB of writes to disk
 for a db size of 52.74GB.


That seems extreme, but leads to several questions.

One is whether your application regularly makes large changes to the same
record, or several changes in the same transaction.  Back versions are
normally stored as differences from the next newer version, so they're
usually small.  However, if you change more than 255 bytes in a record, or
if you change the record more than once in a transaction, the whole back
version is stored - generally much larger than a difference record and
therefore more likely to go off page.
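
A simplified illustration of the size effect - this is not Firebird's actual
difference encoding, just the shape of it:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>

// A back version is kept as a small "delta" when few bytes changed, and as a
// full copy of the old record when the change is too large to describe cheaply.
struct BackVersion { bool isDelta; std::string payload; };

static BackVersion makeBackVersion(const std::string& oldRec, const std::string& newRec)
{
    std::size_t changed = 0;
    const std::size_t n = std::max(oldRec.size(), newRec.size());
    for (std::size_t i = 0; i < n; ++i)
    {
        const char a = i < oldRec.size() ? oldRec[i] : '\0';
        const char b = i < newRec.size() ? newRec[i] : '\0';
        if (a != b) ++changed;
    }
    if (changed <= 255)                                  // small change: cheap delta
        return { true, "delta of " + std::to_string(changed) + " bytes" };
    return { false, oldRec };                            // big change: whole old record
}

int main()
{
    std::string v1(1000, 'a');
    std::string v2 = v1; v2[10] = 'x';                   // tiny update
    std::string v3(1000, 'z');                           // rewrote the whole record

    std::cout << makeBackVersion(v1, v2).payload.size() << " vs "
              << makeBackVersion(v1, v3).payload.size() << " bytes stored\n";  // 16 vs 1000
}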

Another is what cache size you give the sweep.  If you've got back versions
off page, then you'll need a larger cache to keep from writing the same
page over and over.

A third is what other processing is going on simultaneously.  Sweep does
not hold an exclusive lock on a page while it scrubs the entire page, only
for long enough to make the page consistent.  If other transactions need
the page, it will be released - and at this point the number of writes
varies depending on whether you're running one of the Classics or
SuperServer.  In neither case is having sweep compete with update
transactions a good thing. Necessary sometimes, but not performance
enhancing.

Sorry I didn't pick this one up in Support.  This has been a somewhat
harried time for me.  I've copied the support list because the information
is more appropriate there.

Good luck,

Ann


 P.S. you might think a mark and sweep might be more efficient, and there
might be a solution there, but it would require a completely different sort
of sweeper and a few bits that were not available.  My first five thoughts
almost certainly increase the number of writes.

Sweep works through the normal database interface - it's just a program
that reads the database, using cooperative garbage collection so it cleans
up old versions immediately.  Its one clever trick is knowing how to reset
the 

Re: [Firebird-devel] Firebird 3, time to rename conflict names ?

2012-11-19 Thread Ann Harrison
On Sat, Nov 17, 2012 at 3:36 AM, Dmitry Yemanov firebi...@yandex.ru wrote:



 gbak - fb_dump
 nbackup - fb_backup

 because IMO it better reflects their goals.


Gbak is not equivalent to the MySQL or Postgres dumps which produce a
series of insert statements that can be edited.  From my experience, people
expect textual output from a dump and will be disappointed/annoyed to get
our backup file.

Cheers,

Ann


Re: [Firebird-devel] Firebird 3, time to rename conflict names ?

2012-11-19 Thread Ann Harrison
On Mon, Nov 19, 2012 at 11:41 AM, Alex Peshkoff peshk...@mail.ru wrote:


 What can be said for sure - a series of insert statements is definitely
 not optimized for size, but certainly well compressible.


The more serious problem is that each insert statement has to be compiled
and optimized - none of the dump tools use parameterized  statements.
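
A toy sketch of why that matters - the Engine type below is invented and only
counts compilations, it is not any real driver API:

#include <iostream>
#include <string>
#include <vector>

// Counts how often a statement must be compiled/optimized.
struct Engine
{
    int prepares = 0;
    int prepare(const std::string& sql) { (void)sql; return ++prepares; } // compile + optimize
    void execute(int /*handle*/) {}                                       // run a prepared statement
};

int main()
{
    const std::vector<std::string> rows = { "1,'a'", "2,'b'", "3,'c'" };

    Engine dumpStyle;      // a text dump of INSERT statements: one compile per row
    for (const std::string& r : rows)
        dumpStyle.execute(dumpStyle.prepare("INSERT INTO t VALUES (" + r + ")"));

    Engine paramStyle;     // prepare once with placeholders, execute per row
    const int handle = paramStyle.prepare("INSERT INTO t VALUES (?, ?)");
    for (std::size_t i = 0; i < rows.size(); ++i)
        paramStyle.execute(handle);

    std::cout << "compiles: dump=" << dumpStyle.prepares
              << " parameterized=" << paramStyle.prepares << '\n';   // 3 vs 1
}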

Cheers,

Ann


Re: [Firebird-devel] Firebird 3, time to rename conflict names ?

2012-11-19 Thread Ann Harrison
On Mon, Nov 19, 2012 at 11:51 AM, Dalton Calford
dalton.calf...@gmail.comwrote:

 Most hex encoded dumps have special programs to load the data back
 into the engine - I could not imagine anyone trying to use a straight
 sql script to handle any large datasets.


I assure you that MySQL dump produces a text file containing insert
statements.

Best regards,

Ann


Re: [Firebird-devel] Database dialect and BIGINT in metadata

2012-11-01 Thread Ann Harrison
On Thu, Nov 1, 2012 at 9:32 AM, Dimitry Sibiryakov s...@ibphoenix.com wrote:



Way to nowhere. No matter how long the new datatype is, 1/3 won't be
 precise.


1/3 is precise in base 6, though of course 1/5 isn't.  And frankly, double
precision
doesn't help much either since it can't represent 1/3 (base 10) precisely.

3fd5555555555555₁₆ ≈ 1/3

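For anyone who wants to check the constant, a few self-contained lines print
the double nearest to 1/3 and its bit pattern:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    const double third = 1.0 / 3.0;          // the nearest double, not 1/3 exactly
    std::uint64_t bits;
    std::memcpy(&bits, &third, sizeof bits); // reinterpret the IEEE 754 bit pattern
    std::printf("%.20f\n", third);           // 0.33333333333333331483...
    std::printf("%016llx\n", (unsigned long long) bits);  // 3fd5555555555555
}
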

Sorry, I'm getting into angels on the head of a pin mode.

Cheers,

Ann


Re: [Firebird-devel] Database dialect and BIGINT in metadata

2012-10-31 Thread Ann Harrison
On Wed, Oct 31, 2012 at 6:51 AM, Mark Rotteveel m...@lawinegevaar.nlwrote:


  Also didn't Firebird internally already have 64 bit fields (eg
 DOUBLE, ISC_QUAD), or are all those also artefacts of dialect 3?


InterBase was developed on MicroVaxen which had a 64-bit integer datatype.
 So from
V1, there was support for what was called  QUAD.  Contemporary Intel and
Motorola
processors did not support the type, so it was dropped for those versions.

While adding features to dialect 1 seems absurd, I think you'll find that
some major
supporters of Firebird are still running dialect 1 databases because they
maintain
internal precision during arithmetic better than dialect 3.  For reasons
beyond me the
Borland developers felt that the precision of the input and output
parameters had to
constrain the precision during a computation - leading to lots of dropped
bits that
could have been preserved.
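
A rough illustration of the kind of dropped bits I mean - scaled-integer
arithmetic forced back to the result scale at every step versus carrying extra
precision through the computation.  This is not the engine's exact dialect
rule, just the effect:

#include <cstdint>
#include <cstdio>

int main()
{
    // NUMERIC(9,2) values held as scaled integers (hundredths).
    const std::int64_t price = 1000;        // represents 10.00

    // Forcing every intermediate result back to scale 2 drops the fraction:
    std::int64_t constrained = price / 3;   // 333 -> 3.33
    constrained *= 3;                       // 999 -> 9.99

    // Carrying extra precision through the computation keeps those bits:
    const double carried = double(price) / 3.0 * 3.0;

    std::printf("constrained: %lld (9.99)  carried: %.2f (10.00)\n",
                (long long) constrained, carried / 100.0);
}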

Cheers,

Ann


Re: [Firebird-devel] Index and INNER JOIN

2012-08-13 Thread Ann Harrison
On Mon, Aug 13, 2012 at 11:15 AM, arnaud le roy sdnetw...@gmail.com wrote:


 is it normal that only the indexes on one table are used during an inner
 join?


No, but this is a support question, not a developer question, and should be
sent to firebird-supp...@yahoogroups.com.  When you send your question
there, please include the plan that Firebird generated and the approximate
number of records in each table.

Good luck,

Ann


Re: [Firebird-devel] Raising the BLR level

2012-04-05 Thread Ann Harrison
On Mon, Mar 5, 2012 at 6:25 PM, Claudio Valderrama C. cva...@usa.net
 wrote:

 (Thorny issue, I hope Ann Harrison will comment.)


And now, finally, a month later she does.  Included after my signature is a
bit of MySQL code which may explain my desire to keep things as simple as
possible.


Hello, currently the engine supports BLR4 (legacy) and BLR 5. All FB
 versions generate BLR 5. But we are hitting some limits and I think we
 should increase it again (this would be for the first time for FB).


My understanding and memory is that within a blr_rse, each stream has a
context which is expressed as a single byte.  My first reaction was to
suggest adding a blr_rse2 using sixteen bit contexts.  Obviously, even if
that didn't trigger a BLR6 it would not be backward compatible, but there
are other extensions to blr that haven't caused a version change and are
also not backward compatible.

Forward compatibility (i.e. the ability of a new engine to handle an old
database) is critically important.  If someone restores an old database, it
may well have significant amounts of old blr in it.  People really dislike
a database that won't let them at their data - even if the data is old.

Claudio has proposed increasing the blr version and changing those routines
that manage blr to recognize that BLR4 and BLR5 have eight bit contexts but
BLR6 has sixteen bit contexts. That's at least as clean as recognizing that
blr_rse has eight bit contexts while blr_rse2 has sixteen bit contexts.
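
To show the shape of the change in code, here is a tiny hypothetical reader
(not the engine's BlrReader) that picks a one-byte or two-byte context
depending on the BLR version:

#include <cstdint>
#include <cstdio>
#include <vector>

// Invented mini-reader, only to show the 8-bit vs 16-bit context difference.
struct MiniBlrReader
{
    const std::vector<std::uint8_t>* blr;
    std::size_t pos;
    std::uint8_t  getByte() { return (*blr)[pos++]; }
    std::uint16_t getWord() { std::uint16_t lo = getByte(); return lo | (getByte() << 8); }
};

// BLR4/5 carry a context in a single byte; a BLR6-style stream would use a word.
static std::uint16_t readContext(MiniBlrReader& r, int blrVersion)
{
    return blrVersion >= 6 ? r.getWord() : r.getByte();
}

int main()
{
    const std::vector<std::uint8_t> stream = { 0x34, 0x12 };
    MiniBlrReader r5 = { &stream, 0 };
    MiniBlrReader r6 = { &stream, 0 };
    std::printf("blr5 context: %u   blr6 context: %u\n",
                (unsigned) readContext(r5, 5), (unsigned) readContext(r6, 6)); // 52 and 4660
}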

What bothered me was the follow-on idea of fixing lots of housekeeping
issues



 Things we can do in the new BLR version:
 - enlarge some values that are currently held in a single bit


Which means testing for blr version on every reference to them.


 - allow for reuse of holes in the BLR namespace without risk of
 misinterpreting a deprecated verb


Which means testing for blr version on every reference to them.


 - allow for BLR streams bigger than 64K thus supporting procedure BLR that
 will be stored in multiple blob segments if necessary (AFAIK, gbak is
 prepared to handle that).


Fixing the problem that gbak always gets or puts blr in a single read or
write is something I had on my list to do since the first
gds-Galaxy release.  I hope it's finally been done.



 Also, somewhat related to this, I propose that for 64-bit FB, the limit
 MAX_REQUESTS_SIZE should be raised, too or estimated on the fly or put in
 the config file.


That would also be a good thing.

Best wishes,

Ann


Here's the code ...


int
mysql_execute_command(THD *thd)
{
  int res= FALSE;
  bool need_start_waiting= FALSE; // have protection against global read lock
  int  up_result= 0;
  LEX  *lex= thd->lex;
  /* first SELECT_LEX (have special meaning for many of non-SELECT commands) */
  SELECT_LEX *select_lex= &lex->select_lex;
  /* first table of first SELECT_LEX */
  TABLE_LIST *first_table= (TABLE_LIST*) select_lex->table_list.first;
  /* list of all tables in query */
  TABLE_LIST *all_tables;
  /* most outer SELECT_LEX_UNIT of query */
  SELECT_LEX_UNIT *unit= &lex->unit;
#ifdef HAVE_REPLICATION
  /* have table map for update for multi-update statement (BUG#37051) */
  bool have_table_map_for_update= FALSE;
#endif
  /* Saved variable value */
  DBUG_ENTER("mysql_execute_command");
#ifdef WITH_PARTITION_STORAGE_ENGINE
  thd->work_part_info= 0;
#endif

  /*
    In many cases first table of main SELECT_LEX have special meaning =>
    check that it is first table in global list and relink it first in
    queries_tables list if it is necessary (we need such relinking only
    for queries with subqueries in select list, in this case tables of
    subqueries will go to global list first)

    all_tables will differ from first_table only if most upper SELECT_LEX
    do not contain tables.

    Because of above in place where should be at least one table in most
    outer SELECT_LEX we have following check:
    DBUG_ASSERT(first_table == all_tables);
    DBUG_ASSERT(first_table == all_tables && first_table != 0);
  */
  lex->first_lists_tables_same();
  /* should be assigned after making first tables same */
  all_tables= lex->query_tables;
  /* set context for commands which do not use setup_tables */
  select_lex->
    context.resolve_in_table_list_only((TABLE_LIST*)select_lex->
                                       table_list.first);

  /*
    Reset warning count for each query that uses tables
    A better approach would be to reset this for any commands
    that is not a SHOW command or a select that only access local
    variables, but for now this is probably good enough.
  */
  if ((sql_command_flags[lex->sql_command] & CF_DIAGNOSTIC_STMT) != 0)
    thd->warning_info->set_read_only(TRUE);
  else
  {
    thd->warning_info->set_read_only(FALSE);
    if (all_tables)
      thd->warning_info->opt_clear_warning_info(thd->query_id);
  }

#ifdef HAVE_REPLICATION
  if (unlikely(thd->slave_thread))
  {
    if (lex->sql_command == SQLCOM_DROP_TRIGGER)
    {
      /*
        When dropping a trigger, we need to load its table name
Re: [Firebird-devel] tool for encrypting database initially (and probably decrypting it)

2012-04-04 Thread Ann Harrison
On Wed, Apr 4, 2012 at 6:42 AM, Dmitry Yemanov firebi...@yandex.ru wrote:


  2. gfix -encrypt <plugin> {-cryptpar <parameter>} database
  gfix passes plugin name and parameter in DPB, the rest of activity are
  like in database validation. This implementation looks like most simple
  to implement.

 No DPB hackery, please. GFIX could finally start doing something itself.
 For example, run ALTER DATABASE ENCRYPT asynchronously and show the
 progress (if requested) via querying the last encrypted pageno from the
 header. But anyway, this should be the secondary option, SQL is expected
 to be a primary tool for this task.


The design philosophy of InterBase  was that our tools would not have any
secret
hooks into the engine so anyone could write tools that worked against the
database.
The DPB links for validating a database, etc. are not hackery in the sense
of quick fixes
that went contrary to the architecture.  Gfix does a bit
of work itself
in resolving a two-phase commit, but the information is available to all
users. Gstat
is an obvious counter example - it reads the database without going through
the
engine.

And if I may put in one word about the encryption strategy, let me say
on-line.
Slow performance beats no performance.

Best wishes,

Ann


Re: [Firebird-devel] tool for encrypting database initially (and probably decrypting it)

2012-04-04 Thread Ann Harrison
On Wed, Apr 4, 2012 at 8:26 AM, Kjell Rilbe kjell.ri...@datadia.se wrote:


 OK, but that doesn't change what its current name seems to imply.


And it is the tool we have that fixes databases - with the mend option.  Not
that I'd use it if I had an alternative like IBSurgeon, but ...

Ann


Re: [Firebird-devel] Raising the BLR level

2012-03-06 Thread Ann Harrison
Having started this discussion by agreeing with Claudio, now let me suggest
that I was probably wrong. I'll think about it a bit more, but finding a
way of extending blr compatibly seems like a much better idea.  That lets
old databases continue to work and avoids the whole discussion of what
other features to drop.   More tomorrow.

Ann


Re: [Firebird-devel] Raising the BLR level

2012-03-05 Thread Ann Harrison
Claudio,

Hello, currently the engine supports BLR4 (legacy) and BLR 5. All FB
 versions generate BLR 5. But we are hitting some limits and I think we
 should increase it again (this would be for the first time for FB). Dmitry
 asked me to get rid of the 255 streams limit but what I did is only the
 starting point.

 Problematic places that I marked in the code:

 ExprNodes.cpp:
// CVC: bottleneck
const StreamType streamCount = csb->csb_blr_reader.getByte();

for (StreamType i = 0; i < streamCount; ++i)
{
const USHORT n = csb->csb_blr_reader.getByte();
node->internalStreamList.add(csb->csb_rpt[n].csb_stream);
}

 Number of streams is limited to 255, despite me lifting the restrictions in
 other places.

 Again, ExprNodes.cpp, this looks like the complementary part:
// bottleneck
fb_assert(stack.object()->ctx_context <= MAX_UCHAR);
dsqlScratch->appendUChar(stack.object()->ctx_context);

 RecordSourceNodes.cpp
// bottleneck
int count = (unsigned int) csb->csb_blr_reader.getByte();
// Pick up the sub-RseNode's and maps.
while (--count >= 0)

 There may be other places I'm not aware of. The important idea is that BLR
 is expected to hold those values in single bytes and this is not enough
 anymore. I see raising the BLR version as the only solution.


I think you're right.

Ann


Re: [Firebird-devel] Firebird Transaction ID limit solution - Email found in subject

2012-01-03 Thread Ann Harrison
Sean,

 The problem is not downtime, it is how much downtime. Backup and restore is
 so much downtime.

 There are a couple of possible solutions which would reduce the downtime;
 - a new backup/restore tool which would use multiple readers/writers to 
 minimize execution time,

Here we're talking about a logical backup that can be used to restart
transaction numbers.  Record numbers are based loosely on record
storage location.  Since a logical backup/restore changes storage
location and thus record numbers, and indexes link key values to record
numbers, indexes must be recreated.

The problem with a multi-threaded logical backup is that all the
threads contend for the same I/O bandwidth and possibly the same CPU
time.  Much of the restore time is spent sorting keys to recreate
indexes and multiple threads would contend for the same temporary disk
I/O.


 - a data port utility which would allow for data to be ported from a live 
 database to a new database while live is active but would need a finalization 
 step where the live database is shutdown to apply the final data changes and 
 add FK constraints.

It's not immediately obvious to me how that sort of backup/restore
could reset transaction numbers.

 There are, however, certain realities which cannot be overcome; disk 
 throughput/IO performance.





Re: [Firebird-devel] Firebird Transaction ID limit solution

2012-01-03 Thread Ann Harrison
Dimitry,

 - a data port utility which would allow for data to be ported from a live 
 database to a new database while live is active but would need a 
 finalization step where the live database is shutdown to apply the final 
 data changes and add FK constraints.

   And exactly this utility is called a replicator. If made right, it doesn't 
 need FK
 deactivation and can do the finalization step when the new database is already in 
 use.
   Aren't you tired of inventing a wheel?..

Different vehicles need different wheels. The wheels on my bicycle
wouldn't do at all for a cog-railway and cog-railway wheels work very
badly on airplanes.  Airplane wheels are no use at all in a
grandfather clock.  Engineering is all about creating new wheels.
Right now, what we're looking for is a wheel that can reset
transaction ids.  I'm not sure that either replication or the
mechanism Sean is proposing (similar to either the start of a shadow
database or nbackup) can solve the overflowing transaction id problem.

Cheers,

Ann



Re: [Firebird-devel] Firebird Transaction ID limit solution

2012-01-03 Thread Ann Harrison
Woody,

 Maybe I'm a little dense, (probably :), but doesn't FB already know what the
 oldest interesting transaction id is? Why couldn't transaction numbers be
 allowed to wrap back around up to that point? As long as transactions are
 committed at some point, the oldest transaction would move and it would
 solve most problems being run into now.

The oldest interesting transaction is the oldest one that is not known
to be committed.  If the oldest interesting transaction is 34667, and
you're transaction 55778, you know that anything created by
transaction 123 is certain to be committed.

Now let's assume that you're transaction 4294967000 and the oldest
interesting transaction was 429400 when you started.  (Probably
ought to mention that (IIRC) a transaction picks up the value of the
oldest interesting on startup).  Then the transaction counter
rolls around and some new transaction 3 starts creating new
versions...  You know they're committed, so you read the new data.

More generally, how do you know the difference between the old
transaction 3 record versions which you do need to read and new
transaction 3 records that you don't want to read?
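
In code, the fast-path test and how wrapping breaks it (a deliberately naive
sketch, not the engine's check):

#include <cstdint>
#include <cstdio>

// Anything older than the oldest interesting transaction must already be committed.
static bool knownCommitted(std::uint32_t recordTxn, std::uint32_t oldestInteresting)
{
    return recordTxn < oldestInteresting;
}

int main()
{
    // Before any wraparound: a record written by transaction 123, OIT 34667.
    std::printf("%d\n", knownCommitted(123, 34667));            // 1 -- correct

    // After the 32-bit counter wraps, a brand new transaction reuses id 3.
    // Its still-uncommitted versions look "older" than the OIT and would be
    // treated as committed -- exactly the ambiguity that makes plain wrapping unsafe.
    std::printf("%d\n", knownCommitted(3, 4294967000u));         // 1 -- wrong
}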

 I will accept any and all ridicule if this seems idiotic ...

Not at all idiotic.  This stuff is complicated.


Cheers,

Ann



Re: [Firebird-devel] Meaning of RDB$RELATION_FIELDS.RDB$UPDATE_FLAG

2012-01-03 Thread Ann Harrison
Mark,

 I am currently going over the JDBC metadata returned by Jaybird, and I
 was looking for a way to see if a column is COMPUTED BY / GENERATED
 ALWAYS AS.

 I found that I should probably look at RDB$FIELDS.RDB$COMPUTED_BLR or
 RDB$COMPUTED_SOURCE for this, but I noticed that
 RDB$RELATION_FIELDS.RDB$UPDATE_FLAG is 0 for a computed column, while it
 is 1 for 'normal' fields.

The definitive sign of a computed field is RDB$COMPUTED_BLR in either
RDB$FIELDS or RDB$DOMAINS if the field is defined through a domain.
The computed source is kept for the convenience of utilities that
recreate the DDL, but can be deleted without changing the semantics of
the field.  I wouldn't rely on the update flag which could be used in
the future for other types of fields that can't be modified.


Best regards,

Ann



Re: [Firebird-devel] [SPAM 5] Re: Firebird Transaction ID limit solution

2012-01-03 Thread Ann Harrison
Kjell,



 Or a more automated and built-in support to do such a
 replicate/backup/restore/reverse. For me it's question of time. Sure, I
 could learn how to setup a cluster and replication. But there are dozens
 of other things I also need to do yesterday, so having to learn this on
 top of everything else is a stumbling block.

 Could the procedure be packaged into some kind of utility program or
 something?

The short answer is probably just No.   Could someone build a robot
that would identify a flat tire,  take your spare tire out of your
trunk, jack up your car, remove the flat, put on the spare, lower the
care, and put the flat tire back in the trunk?  Probably.  Would it be
easier than learning to change a tire?  Somewhat unlikely.   On a
heavily loaded system, the replicated database (replicant in my
jargon) can't share a disk and set of set of cpu's with the primary
database.  (That's the trunk part of the analogy.)  Once established,
the replicant has to create a foundation copy of the primary database
(jacking up the car), then process updates until it's approximately
current with the primary database (removing the old tire),
then initiate a backup/restore, wait for the restore to complete
successfully (install the new tire), swap in the newly created database
and catch up to the primary again (lower the car).  Finally, once the
newly restored replicant is absolutely current, the system must
quiesce for a few seconds to swap primary and replicant databases
(getting the old tire into the trunk).


 I'm thinking that nbackup locks the master file while keeping track of
 changed pages in a separate file. Perhaps a transaction id consolidation
 similar to what happens on backup/restore could be performed on a locked
 database master while logging updates in a separate file, and then bring
 the consolidated master up to date again.

nbackup works at the page level which is simpler than handling record
level changes.  Unlike records, pages never go away, nor do they
change their primary identifier.


 If this is very difficult, perhaps there's no point - devel resources
 better spent elsewhere. But if it would be a fairly simple task...?

Alas, I doubt that it's simple.


Best regards,

Ann



Re: [Firebird-devel] Meaning of RDB$RELATION_FIELDS.RDB$UPDATE_FLAG

2012-01-03 Thread Ann Harrison
On Tue, Jan 3, 2012 at 4:56 PM, Mark Rotteveel m...@lawinegevaar.nl wrote:

 The definitive sign of a computed field is RDB$COMPUTED_BLR in either
 RDB$FIELDS or RDB$DOMAINS if the field is defined through a domain

 There is no RDB$DOMAINS ...

Too many databases... indeed, it's RDB$FIELDS not system.domains.

Cheers,

Ann



Re: [Firebird-devel] Firebird Transaction ID limit solution

2012-01-02 Thread Ann Harrison
On Mon, Jan 2, 2012 at 2:32 PM, Yi Lu y...@ramsoft.biz wrote:
 Approach 1 seems to be least risky. Disk space should not be a big issue with
 today's hardware.


The problem is not disk space, but locality of reference.  With small
records, adding
four bytes could decrease the number of rows per page by 10-15%, leading to more
disk I/O.  That's a significant cost to every database which can be
avoided by using
a slightly more complicated variable length transaction id or a flag
that indicates which
size is used for a particular record.  Firebird already checks the
flags to determine
whether a record is fragmented, so the extra check adds negligible overhead.
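
Back-of-the-envelope arithmetic, with made-up but plausible sizes:

#include <cstdio>

int main()
{
    const int usableBytes = 4096;              // space for records on one data page
    const int smallRecord = 30;                // a small compressed record plus header
    const int widerRecord = smallRecord + 4;   // same record with four more id bytes

    const int before = usableBytes / smallRecord;   // 136 rows per page
    const int after  = usableBytes / widerRecord;   // 120 rows per page

    std::printf("rows per page: %d -> %d (%.1f%% fewer, so more pages and more I/O)\n",
                before, after, 100.0 * (before - after) / before);   // about 11.8%
}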

And, as an aside, sweeping is not read-only now.  Its purpose is to
remove unneeded
old versions of records from the database.  The actual work may be
done by the garbage
collect thread, but the I/O is there.

Good luck,

Ann



Re: [Firebird-devel] Database shutdown when udf raises an exception

2011-12-08 Thread Ann Harrison
On Thu, Dec 8, 2011 at 3:19 PM, Jesus Garcia jeg...@gmail.com wrote:


 If using superserver, when an abnormal shutdown starts, if another thread
 is committing and updating a large number of pages of several types, the
 commit process can be terminated without finishing, and some pages are written
 and others not. Could this happen?

Yes, but it doesn't matter.  The last page written is the page that
marks the transaction as committed.   Until that is written, the
updates made by the transaction will be removed when
the record is revisited after the server restarts.
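
In miniature, the ordering looks like this (hypothetical Page type, just to
show the sequence):

#include <cstdio>
#include <string>
#include <vector>

// Every page a transaction dirtied goes to disk before the page that records
// "committed", so a crash at any point leaves either a complete transaction or
// one that simply never happened.
struct Page { std::string name; };

static void commitTransaction(const std::vector<Page>& dirtyPages, const Page& tipPage)
{
    for (const Page& p : dirtyPages)
        std::printf("write %s\n", p.name.c_str());        // data/index pages first
    std::printf("write %s   <- only now is the transaction committed\n",
                tipPage.name.c_str());
}

int main()
{
    const std::vector<Page> dirty = { { "data page 101" }, { "index page 58" } };
    const Page tip = { "transaction inventory page" };
    commitTransaction(dirty, tip);   // a crash before the last write acts as a rollback
}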

 I ask it because I have had corruptions with InterBase 2007 and 2009 when
 the engine terminated abnormally, some of them heavy, with forced writes. I
 know IB is not FB, but IB people also say that with forced writes there is
 no corruption, but it is not true.

Sorry, the last time I had any influence on InterBase was late July 2000.  I can
say for sure that Firebird has fixed a number of problems, including a
recent change
that makes forced writes actually work for Linux.

 By the moment Firebird 2.5.1 is quite stable and the engine is working for
 months without problems, but i would like to know it to feel better.

Perhaps others on the list will describe their experiences.
Architecturally, it should work,
but the difference between architecture and implementation is bugs.
And bugs happen.
Firebird has the advantage of having thousands of eyes (well, scores
at least) on the code.

Good luck,

Ann



Re: [Firebird-devel] Trace API - What's the unit for number of (reads |fetches ...)

2011-11-14 Thread Ann Harrison
On Mon, Nov 14, 2011 at 5:11 PM, Thomas Steinmaurer 
t...@iblogmanager.comwrote:


 Ok, but is there a way then to tell how many pages have been fetched
 from the cache, as the number above for fetched is more likely
 referenced and not the real number of pages fetched from memory?


Pages aren't fetched from cache.  Once a page is in cache, data is fetched
from it.  Sometimes the data is a record, to be expanded into a record
buffer, sometimes it's an index node, sometimes it's a page number from an
offset on a pointer page, or the state of a transaction from a TIP, or ...


 I guess the same applies to MON$IO_STATS.MON$PAGE_FETCHES? If so, isn't
 comparing MON$PAGE_READS with MON$PAGE_FETCHES a bit misleading if one
 wants to check to possibly increase the database page buffers?


It's more misinterpreted than misleading.  You would never increase the
number of pages in the cache to reduce the number of fetches.  In an ideal
world, the number of fetches would be enormous and the number of reads
would be infinitesimal, meaning that almost every request was resolved
from cache and therefore the cache is big enough.  The only ways (I can
think of at the moment) to reduce the number of fetches are to do less
work, or work more efficiently (e.g. don't do a count(*) on a big table).
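
As a rough way to read those two counters (made-up numbers):

#include <cstdio>

int main()
{
    // Counters of the MON$IO_STATS kind: page references vs. physical reads.
    const long long fetches = 12500000;   // MON$PAGE_FETCHES-style references
    const long long reads   = 42000;      // MON$PAGE_READS-style physical reads

    // Every fetch that did not need a physical read was satisfied from the cache.
    const double hitRatio = 100.0 * (double)(fetches - reads) / (double)fetches;
    std::printf("cache hit ratio: %.2f%%\n", hitRatio);   // about 99.66%
}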




Cheers,

Ann


[Firebird-devel] [FB-Tracker] Created: (CORE-3588) More detail in message wrong page type

2011-08-29 Thread Ann Harrison (JIRA)
More detail in message wrong page type


 Key: CORE-3588
 URL: http://tracker.firebirdsql.org/browse/CORE-3588
 Project: Firebird Core
  Issue Type: Improvement
  Components: Engine
 Environment: All
Reporter: Ann Harrison
Priority: Trivial


Twenty years ago, concise error messages made some sense, but diagnosing the 
"wrong page type" errors would be much easier if Firebird said "expected Index 
Page, encountered Data Page" rather than "expected n, encountered m" - at 
least it would save me looking up the page types each time.  Probably not worth 
much since those errors are mostly found in older versions, but still, if one 
realized that something that should have been an index page was something else, 
then there would be a clue that the workaround would be to rebuild the index.  
Hey, maybe even include the name of the index and table... 

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://tracker.firebirdsql.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira





Re: [Firebird-devel] Association of transactions, connections, and two-phase commit

2011-08-29 Thread Ann Harrison
On Mon, Aug 29, 2011 at 5:57 PM, Vlad Khorsun
hv...@users.sourceforge.netwrote:


  Another issue here is that when a non-recoverable error
  happens (e.g. request synchronisation error),

Hmm... is it an unrecoverable error? BTW, a request synchronisation error
 could happen only
 at fetching records, iirc...


Between a prepare and a commit, the only irrecoverable errors I know of are
disk errors and communication errors.  All data is already on disk and there
is no additional processing (give or take post-commit triggers). But there
could be a fatal disk error in changing the transaction state, or the
machine that ran that part of the transaction could have dropped into the
San Andreas  fault.

Both happy thoughts.

Cheers,

Ann




Re: [Firebird-devel] column naming rules

2011-06-15 Thread Ann Harrison
On Wed, Jun 15, 2011 at 8:34 AM, Treeve Jelbert tre...@scarlet.be wrote:
 I have been converting some code which was written to access a database from
 python.

 The origin code was for postgresql/sqlite but Firebird objects to some names
 of the type '_xyz'. Apparently the leading underscore has special meaning in
 Python.

 After changing the affected names, every seems to work with Firebird-2.5

 What does the SQL standard say about such names?

Here's what the 2008 standard has to say...  The first character of an
unquoted identifier is an identifier start.  Subsequent characters
are identifier extend.

An identifier start is any character in the Unicode General
Category classes “Lu”, “Ll”, “Lt”, “Lm”, “Lo”, or “Nl”.

NOTE 77 — The Unicode General Category classes “Lu”, “Ll”, “Lt”, “Lm”,
“Lo”, and “Nl” are assigned to Unicode characters that are,
respectively, upper-case letters, lower-case letters, title-case
letters, modifier letters, other letters, and letter numbers.

2) An identifier extend is U+00B7, “Middle Dot”, or any character in
the Unicode General Category classes “Mn”, “Mc”, “Nd”, “Pc”, or “Cf”.

NOTE 78 — The Unicode General Category classes “Mn”, “Mc”, “Nd”, “Pc”,
and “Cf” are assigned to Unicode characters that are, respectively,
nonspacing marks, spacing combining marks, decimal numbers, connector
punctuations, and formatting codes.

(Page 148 of the Foundation document)
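
An ASCII-only sketch of that rule, enough to show why a leading underscore is
rejected in a regular identifier (underscore is in class Pc, which is only an
identifier extend); a real check would use the Unicode categories above:

#include <cctype>
#include <cstdio>
#include <string>

// Simplification: the first character must be a letter (an identifier start);
// later characters may also be digits or '_' (identifier extend).
static bool isRegularIdentifier(const std::string& name)
{
    if (name.empty() || !std::isalpha((unsigned char) name[0]))
        return false;
    for (std::size_t i = 1; i < name.size(); ++i)
    {
        const unsigned char c = (unsigned char) name[i];
        if (!std::isalnum(c) && c != '_')
            return false;
    }
    return true;
}

int main()
{
    std::printf("_xyz  -> %d\n", isRegularIdentifier("_xyz"));    // 0: '_' can't start one
    std::printf("xyz_1 -> %d\n", isRegularIdentifier("xyz_1"));   // 1
}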


 Should I complain to the writers of the original code?


You might, but I doubt they'll be happy to hear from you.


Good luck,


Ann
