Consider a large database (200 GB, large tables with 450 million rows) which
is running on a kick-a** server with a pool of
enterprise SSDs for storage (more IOPS than Firebird could ever use), from which I
need to extract data on a regular basis
throughout the day for use by an external BI
From: Leyne, Sean
I need to extract the BI data as a true snapshot of the data (ensuring FKs are
valid), in as short a timeframe as possible.
Because runtime is critical, I want to break the extract process into logical
pieces and run each piece in a separate
process/thread (with its own
On 04/02/14 13:28, Vlad Khorsun wrote:
Instead, I think, we could implement something like a clone transaction
which will:
- get the master transaction number as input
- create new
One detail should be taken into account here: what if the user passes an
incorrect transaction number as input?
Of course, an error should be raised and no
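The clone-transaction idea above can be sketched as a toy model. This is a hypothetical API for illustration only, not Firebird's actual interface or internals: the clone takes the master transaction's number, copies its snapshot, and raises an error for an unknown number.

```python
# Toy MVCC model of the proposed "clone transaction" (conceptual sketch,
# not Firebird code; all names are hypothetical).

class TransactionManager:
    def __init__(self):
        self.next_id = 1
        self.active = {}        # txn id -> snapshot (frozen set of committed txn ids)
        self.committed = set()

    def start(self):
        """Start an ordinary concurrency (snapshot) transaction."""
        txn_id = self.next_id
        self.next_id += 1
        self.active[txn_id] = frozenset(self.committed)
        return txn_id

    def clone(self, master_id):
        """Start a new transaction sharing the master's snapshot."""
        if master_id not in self.active:
            # As discussed above: an incorrect number must raise an error.
            raise ValueError(f"unknown transaction number {master_id}")
        txn_id = self.next_id
        self.next_id += 1
        self.active[txn_id] = self.active[master_id]  # same frozen snapshot
        return txn_id

    def commit(self, txn_id):
        del self.active[txn_id]
        self.committed.add(txn_id)

mgr = TransactionManager()
writer = mgr.start()
mgr.commit(writer)          # txn 1 commits some data
master = mgr.start()        # its snapshot now includes txn 1
worker = mgr.clone(master)  # worker sees exactly the same snapshot
assert mgr.active[master] == mgr.active[worker]
```

Since the snapshot is an immutable value copied at creation time, any number of workers can be cloned from one master without coordination afterwards.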
On 04/02/14 15:20, Vlad Khorsun wrote:
Note, it makes no sense for read-committed transactions (as they have
no snapshot to share/clone) and for
consistency write transactions (as they will conflict on relation locks
on concurrent write attempts). I.e.
it will work for concurrency
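The point about read-committed mode can be illustrated with a minimal visibility check (a conceptual sketch, not Firebird code): a concurrency transaction evaluates visibility against a fixed snapshot taken at start, while read-committed always consults the current committed set, so there is nothing stable to clone.

```python
# Toy visibility rules: why a snapshot clone only makes sense for
# concurrency (snapshot) transactions. Conceptual sketch only.

committed = {1}                       # txn 1 committed before we start

def visible_concurrency(row_txn, snapshot):
    return row_txn in snapshot        # fixed view taken at transaction start

def visible_read_committed(row_txn):
    return row_txn in committed       # always the live committed set

snapshot = frozenset(committed)       # concurrency txn starts here
committed.add(2)                      # txn 2 commits afterwards

print(visible_concurrency(2, snapshot))   # False: the snapshot is stable
print(visible_read_committed(2))          # True: no snapshot exists to clone
```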
Hello, All.
What is the reason why VIO_erase() always creates a new record version
instead of using
update_in_place() as VIO_modify() does?
--
WBR, SD.
--
Firebird-Devel mailing list, web interface at
02.04.2014 20:59, Dimitry Sibiryakov wrote:
What is the reason why VIO_erase() always creates a new record version
instead of using
update_in_place() as VIO_modify() does?
Can you imagine the same record being deleted twice?
Dmitry
02.04.2014 19:38, Dmitry Yemanov wrote:
Can you imagine the same record being deleted twice?
No. But what does it change?
02.04.2014 22:03, Dimitry Sibiryakov wrote:
Can you imagine the same record being deleted twice?
No. But what does it change?
VIO_update() calls
02.04.2014 20:12, Dmitry Yemanov wrote:
VIO_erase() cannot delete some record twice in the same
transaction, period.
But it can delete an updated record, no?..
02.04.2014 22:16, Dimitry Sibiryakov wrote:
VIO_erase() cannot delete some record twice in the same
transaction, period.
But it can delete an updated record, no?..
Yes, but this is still going to be the last version in the chain.
Perhaps nobody cared whether there will be two or three versions
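The version-chain behavior discussed above can be mimicked with a toy model (a conceptual sketch of the idea, not the actual VIO code): a transaction that modifies its own freshly written version can rewrite it in place, while a delete always appends a new (deleted) version, which then stays the last version in the chain.

```python
# Toy record version chain illustrating the VIO discussion.
# Conceptual sketch only; Firebird's real implementation differs.

class Record:
    def __init__(self, value):
        self.versions = [("insert", value, 0)]  # (kind, value, txn)

    def modify(self, txn, value):
        kind, _, last_txn = self.versions[-1]
        if last_txn == txn and kind != "delete":
            # update-in-place: the transaction rewrites its own version
            self.versions[-1] = ("update", value, txn)
        else:
            self.versions.append(("update", value, txn))

    def erase(self, txn):
        # Always appends a new (deleted) version, as VIO_erase() does.
        self.versions.append(("delete", None, txn))

rec = Record("a")
rec.modify(txn=1, value="b")  # appended: the insert came from txn 0
rec.modify(txn=1, value="c")  # rewritten in place: same txn, own version
rec.erase(txn=1)              # delete appends; it is the last in the chain
```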
As far as I can tell, there is no way for this to be done.
I'd say you need a common snapshot. I.e. a few transactions should use the
same snapshot view of the database.
Correct.
Instead, I think, we could implement something like a clone transaction
Clone Transaction would meet my
1 million transactions a day means less than 20 transactions per second.
Yes, if the transactions were spread evenly throughout the day -- over a 10-12 hour
work day, we are talking about 23-28 transactions per second.
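For reference, the averages implied by the figures above work out as follows (simple arithmetic, not measured numbers):

```python
# Average transaction rates implied by 1 million transactions per day.
per_day = 1_000_000
full_day = per_day / (24 * 3600)      # spread over the whole day
work_day_10h = per_day / (10 * 3600)  # spread over a 10-hour work day
work_day_12h = per_day / (12 * 3600)  # spread over a 12-hour work day

print(round(full_day, 1))      # ≈ 11.6 tps
print(round(work_day_10h, 1))  # ≈ 27.8 tps
print(round(work_day_12h, 1))  # ≈ 23.1 tps
```

Peaks during business hours would of course be higher than these averages.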
200 GB on SSD storage can be copied in a matter of minutes.
It depends on
Alex,
From Sean's letter I've understood that he needs read-only access. If we
limit ourselves to consistency read mode only, doesn't it seem that the
implementation promises to be much simpler?
I did outline a read-only requirement, but I can also think of cases where
read/write support via
02.04.2014 21:44, Leyne, Sean wrote:
I need to extract the data and generate a text file -- but I don't need all of
the rows.
What's wrong with an ordinary concurrency level transaction then?
02.04.2014 21:44, Leyne, Sean wrote:
I need to extract the data and generate a text file -- but I don't need all of
the rows.
What's wrong with an ordinary concurrency level transaction then?
I need all of the connections to see the same view of the data.
With ordinary transactions, it is
02.04.2014 22:04, Leyne, Sean wrote:
I need all of the connections to see the same view of the data.
What's wrong with a single connection?
I mean that your requirement to perform a one-time export in minimal time is
questionable. What real problem are you trying to solve?
Hi,
you can adapt my FAQ example
http://itstop.pl/en-en/Porady/Firebird/FAQ2/FIRST-SNAPSHOT
but starting all transactions as First Snapshot transactions in many
connections while the system is still working is, I suppose, quite impossible
with it.
But the question is: why do you need to have many
On 4/2/2014 5:28 AM, Vlad Khorsun wrote:
I think it will not be easy to implement from the Y-valve point of view
- I see no simple way to pass handles from one process
(coordinator) to another (worker). Especially in the case of Classic Server.
Instead, I think, we could
-----Original Message-----
From: Dimitry Sibiryakov [mailto:s...@ibphoenix.com]
Sent: Wednesday, April 02, 2014 4:09 PM
To: For discussion among Firebird Developers
Subject: Re: [Firebird-devel] How to? Coordinating transactions for multiple
connections in single call
02.04.2014 22:04,
you can adapt my FAQ example
http://itstop.pl/en-en/Porady/Firebird/FAQ2/FIRST-SNAPSHOT
but starting all transactions as First Snapshot transactions in many
connections while the system is still working is, I suppose, quite impossible
with it.
At first glance it does seem to be appropriate
Vlad,
In my proposal, transactions are fully independent and isolated as usual.
The only difference with a clone is how it was created. After creation we have
a usual transaction with usual behavior. Again, I do not propose to share a
transaction
- I propose to clone (or copy) some of its
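The independence point above can be illustrated with a short sketch (hypothetical names, not Firebird code): cloning copies only the snapshot; afterwards each transaction keeps its own write set and commits or rolls back on its own.

```python
# Toy sketch: a cloned transaction copies the master's snapshot but is
# otherwise fully independent (own writes, own commit/rollback).
# Conceptual illustration only.

class Txn:
    def __init__(self, txn_id, snapshot):
        self.id = txn_id
        self.snapshot = frozenset(snapshot)  # copied at creation, never shared
        self.writes = {}                     # private write set
        self.state = "active"

    def clone(self, new_id):
        return Txn(new_id, self.snapshot)    # only the snapshot is copied

master = Txn(10, snapshot={1, 2, 3})
worker = master.clone(11)
worker.writes["row42"] = "new value"  # does not affect the master
master.state = "committed"            # master commits on its own
```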
Jim,
But the question is: why do you need to have many connections in this case?
What is wrong with a single connection?
I do not know what exactly you are trying to avoid, but it smells to
me like a black-hole design.
As I answered Dimitry S.:
I need all of the connections to see the same