Last time, when I tested my record-level compression, I got this change:
DB size decreased from 90 GB to 60 GB.
Some select count(*) queries from a table like this one:
Create Table ProductDataEx (
idProduct TLongInt NOT NULL,
idMeasurand Smallint NOT NULL,
idMeasurementMode
Hi,
Windows file compression uses LZNT1
https://msdn.microsoft.com/en-us/library/jj711990.aspx
which is dictionary-based compression, like LZ4.
It works in 64 kB blocks that are compressed into a smaller area, with some
free space left for updates.
But any update is problematic, and for MSSQL, Hyper-V and more is
Hi Karol,
this is not Windows compression.
This is record-fragment compression, which is done when the record is
put into a DB page.
The new RLE has similar complexity to the current RLE for compression, but
is more efficient for decompression.
Hi,
you misunderstood me.
I said that I saw benefits when I applied
A decrease from ~150 s (any run) to 52 s for the first run and 36 s for
subsequent runs.
If you are interested, I can send you the source code or publish a compiled
FB3 for Windows x64.
Slavek
Hi,
the results are optimistic, but
can you run a different test and compare write times, not only reads?
I ask because I see the
Vlad,
I can't exactly recall concrete numbers from the past, but I wonder if
Firebird can currently go beyond 10MB/s disk I/O utilization. And this
is not about being limited by spinning disk and seek time (random I/O).
You'll be surprised if you reevaluate the concrete numbers...
Firebird 3 or
22.03.2015 12:43, Thomas Steinmaurer wrote:
I can't exactly recall concrete numbers from the past, but I wonder if
Firebird can currently go beyond 10MB/s disk I/O utilization. And this
is not about being limited by spinning disk and seek time (random I/O).
You'll be surprised if you reevaluate
On Saturday, March 21, 2015, Thomas Steinmaurer t...@iblogmanager.com wrote:
IMHO 99% of the Firebird customer-base isn't in the distributed system
business, thus state-of-the-art scale-up (instead of scale-out)
capabilities on a single server
22.03.2015 13:44, Thomas Steinmaurer wrote:
Vlad,
I can't exactly recall concrete numbers from the past, but I wonder if
Firebird can currently go beyond 10MB/s disk I/O utilization. And this
is not about being limited by spinning disk and seek time (random I/O).
You'll be surprised if
What compilers do you have in mind that do not support the core features?
http://en.cppreference.com/w/cpp/compiler_support
I am asking for sane C++11 features, not all of them.
On Sat, Mar 21, 2015 at 9:56 PM, James Starkey j...@jimstarkey.net wrote:
I ask again: Which platforms without a
22.03.2015 15:21, Thomas Steinmaurer wrote:
I'm confused. ;-)
Sorry ;)
With FB 2.5.2 SC 64-bit on Windows 7 Prof.
While copying an 18GB database from folder A to B on the same physical
spinning disk at ~33MB/s read + ~33MB/s write, thus 66MB/s total, doing
a select count(*) on that
I ask again: which platforms without a conforming C++11 are you prepared
to write off?
On Saturday, March 21, 2015, marius adrian popa map...@gmail.com wrote:
I agree with you that some C++11 features can make our codebase cleaner and
easier to read (I might say that it is more pythonic
On Saturday, March 21, 2015, Thomas Steinmaurer t...@iblogmanager.com wrote:
IMHO 99% of the Firebird customer-base isn't in the distributed system
business, thus state-of-the-art scale-up (instead of scale-out)
capabilities on a single server will be excellent.
Scale-up is a very bad
The first and foremost question I must ask is: what platforms that don't
support C++11 are you prepared to write off? It's possible, I suppose, to
write conditional code that supports both C++11 and legacy C++, but from
much experience, that's the formula for a disaster. Code must be debugged in
Jim,
I think it would be vastly better for Firebird to address operating across
cheap
commodity servers than to optimize for exotic -- and hyper-expensive --
servers.
Operating across servers is ... a cluster, which suggests MPI as the method to
distribute messages between the nodes...
Sean,
2015-03-20 17:15 GMT+01:00 Leyne, Sean s...@broadviewsoftware.com:
In the case of the PHI, having up to 61* helper processors which could be
responsible for performing sorting/grouping for *any* running query (so a
shared resource) would provide significant benefit. In the case of the
Jim,
The problem with specialized processors is that they are a scarce
resource that must be managed rather than shared. They're just dandy
when a server has a single specialized load, but on a server with
multiple clients, one guy gets the specialized processor and everyone
else
Hi,
What about just "The power of C++11 in Firebird"?
On 20 March 2015 at 23:44, James Starkey j...@jimstarkey.net wrote:
I think it would be extremely difficult to implement both fine grain
multi-threading and co-processor exploitation in a shared meta-data
implementation. If Firebird were
OpenCL 2.1 will include a C++ subset
http://www.anandtech.com/show/9039/khronos-announces-opencl-21-c-comes-to-opencl
I have CUDA on my workstation (GTX 760); I also have a laptop with a CUDA GT
Also, you can use it on Amazon EC2:
https://aws.amazon.com/articles/7249489223918169
but if you really want
The problem with specialized processors is that they are a scarce resource
that must be managed rather than shared. They're just dandy when a server
has a single specialized load, but on a server with multiple clients, one
guy gets the specialized processor and everyone else waits.
The best
Marius,
I wonder how we can use the power of CUDA in the engine:
http://devblogs.nvidia.com/parallelforall/power-cpp11-cuda-7/#more-4999
I don't think we should focus on CUDA specifically but on parallel processing.
There are a variety of technologies (OpenCL, OpenMP, perhaps even OpenMPI)