The method I was using was essentially
grep VmRSS /proc/$pid/status
Comparing these two methods, I get
Your method (PSS):
My method (RSS):
VmRSS: 2410396 kB
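For reference, both numbers can be pulled programmatically on Linux. This is a minimal sketch, assuming a standard /proc layout (VmRSS in /proc/&lt;pid&gt;/status, per-mapping Pss fields in /proc/&lt;pid&gt;/smaps); it is not the exact tooling either of us used:

```python
import sys

def rss_kb(pid):
    """RSS as reported by VmRSS in /proc/<pid>/status (what the grep sees)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is in kB
    return 0

def pss_kb(pid):
    """PSS: shared pages divided among sharers, summed over /proc/<pid>/smaps."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])  # value is in kB
    return total

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    print(f"RSS: {rss_kb(pid)} kB, PSS: {pss_kb(pid)} kB")
```

PSS will generally come out at or below RSS, since pages shared with other processes are counted fractionally instead of in full.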
On Oct 21, 2015, at 12:29 AM, Tom Zander wrote:
> On Tuesday 20 Oct 2015
Data compression adds latency and reduces predictability, so engineers have
decided to leave compression to the application layer rather than the transport
layer or lower, letting the application designer decide what tradeoffs to make.
On Nov 11, 2015, at 10:49 AM, Marco Pontello via bitcoin-dev
> My nodes are continuously running getblocktemplate and getinfo, and I also
> suspected the issue is in either gbt or the rpc server.
> The instance only takes a few hours to get up to that memory usage.
> On Oct 18, 2015 8:59 AM, "Jonathan Toom
I am leaning towards supporting a can kick proposal. Features I think are
desirable for this can kick:
0. Block size limit around 2 to 4 MB. Maybe 3 MB? Based on my testnet data, I
think 3 MB should be pretty safe.
1. Hard fork with a consensus mechanism similar to BIP101
2. Approximately 1 or
On Dec 9, 2015, at 8:09 AM, Gregory Maxwell wrote:
> On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim wrote:
> By contrast it does not reduce the safety factor for the UTXO set at
> all; which most hold as a much greater concern in general;
I don't agree
On Dec 8, 2015, at 6:02 AM, Gregory Maxwell via bitcoin-dev
> The particular proposal amounts to a 4MB blocksize increase at worst.
I understood that SegWit would allow about 1.75 MB of data in the average case
while also allowing up to 4 MB of
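The 1.75 MB average / 4 MB worst-case figures fall out of the 75% witness discount. Here is a sketch of that arithmetic; the discount formula follows the SegWit proposal, while the "typical" witness share is an assumption chosen to reproduce the 1.75 MB figure:

```python
def max_block_size(witness_share, discount=0.75, vsize_limit=1.0):
    # With witness bytes discounted by `discount`, a block of total size S
    # whose fraction `witness_share` is witness data has virtual size
    # S * (1 - discount * witness_share), which is capped at vsize_limit (MB).
    return vsize_limit / (1 - discount * witness_share)

print(round(max_block_size(4 / 7), 2))  # 1.75 MB when ~57% of bytes are witness
print(max_block_size(1.0))              # 4.0 MB worst case (all witness data)
```

So the 4 MB ceiling is only reachable by a block constructed adversarially to be nearly all witness data.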
On Dec 9, 2015, at 7:50 AM, Jorge Timón wrote:
> I don't understand. SPV nodes won't think they are validating transactions
> with the new version unless they adapt to the new format. They will simply
> be unable to receive payments using the new format if it is a softfork
Agree. This data does not belong in the coinbase. That space is for miners to
use, not devs.
I also think that a hard fork is better for SegWit, as it reduces the size of
fraud proofs considerably, makes the whole design more elegant and less
kludgey, and is safer for clients who do not
On Dec 9, 2015, at 7:48 AM, Luke Dashjr wrote:
> How about we pursue the SegWit softfork, and at the same time* work on a
> hardfork which will simplify the proofs and reduce the kludgeyness of merge-
> mining in general? Then, if the hardfork is ready before the softfork, they
This means that a server supporting SW might only hear of the tx data and not
get the signature data for some transactions, depending on how the relay rules
worked (e.g. if the SW peers had higher minrelaytxfee settings than the legacy
peers). This would complicate fast block relay code like
1. I think we should limit the sum of the block and witness data to
nBlockMaxSize*7/4 per block, for a maximum of 1.75 MB total. I don't like the
idea that SegWit would give us 1.75 MB of capacity in the typical case, but we
have to have hardware capable of 4 MB in adversarial conditions (i.e.
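The proposed cap is simple to state in code. A sketch, assuming nBlockMaxSize = 1 MB (which gives the 1.75 MB total above):

```python
N_BLOCK_MAX_SIZE = 1_000_000  # bytes; assumed 1 MB base block size limit

def within_proposed_cap(base_bytes, witness_bytes):
    # Proposal above: base + witness <= nBlockMaxSize * 7/4 (1.75 MB total).
    return base_bytes + witness_bytes <= N_BLOCK_MAX_SIZE * 7 // 4

print(within_proposed_cap(1_000_000, 750_000))  # True: exactly at the cap
print(within_proposed_cap(1_000_000, 800_000))  # False: exceeds 1.75 MB
```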
Off-topic: If you want to decentralize hashing, the best solution is probably
to redesign p2pool to use DAGs. p2pool would be great except for the fact that
the 30 sec share times are (a) long enough to cause significant reward variance
for miners, but (b) short enough to cause hashrate loss
On Dec 30, 2015, at 3:49 PM, Jonathan Toomim wrote:
> Since we've been relying on the trustworthiness of miners during soft forks
> in the past (and it only failed us once!), why not
make it explicit?
(Sorry for the premature send.)
On Dec 30, 2015, at 11:00 AM, Bob McElrath via bitcoin-dev
> joe2015--- via bitcoin-dev [firstname.lastname@example.org] wrote:
>> That's the whole point. After a conventional hardfork everyone
>> needs to upgrade, but there is no way to
On Dec 25, 2015, at 3:15 AM, Ittay via bitcoin-dev
> Treating the pool block withholding attack as a weapon has bad connotations,
> and I don't think anyone directly condones such an attack.
I directly condone the use of block withholding attacks
Another option for how to deal with block withholding attacks: Give the miner
who finds the block a bonus. This could even be part of the coinbase
Block withholding is effective because it costs the attacker 0% and costs the
pool 100%. If the pool's coinbase tx was 95% to the
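The incentive change from a finder's bonus can be sketched with toy numbers. The 5%/95% split is an assumption extrapolated from the truncated "95%" above, and the figures are illustrative, not from the mail:

```python
# Toy model of what a block-withholding attacker forfeits when the pool
# pays a finder's bonus out of the coinbase. All numbers are assumptions.
block_reward = 25.0      # BTC per block in this era
bonus_fraction = 0.05    # 5% of the coinbase paid directly to the block finder

# Without a bonus, withholding a found block costs the withholder ~0%
# personally (the loss is spread across the whole pool). With a bonus,
# every withheld block personally forfeits the finder's cut.
forfeited_per_block = block_reward * bonus_fraction
print(forfeited_per_block)  # 1.25 BTC forfeited per withheld block
```

This turns withholding from a free attack into one with a direct, per-block personal cost.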
On Dec 26, 2015, at 8:44 AM, Pieter Wuille via bitcoin-dev
> Furthermore, 75% is pretty terrible as a switchover point, as it guarantees
> that old nodes will still see a 25% forked off chain temporarily.
Yes, 75% plus a grace period is better. I
I traveled around in China for a couple weeks after Hong Kong to visit with
miners and confer on the blocksize increase and block propagation issues. I
performed an informal survey of a few of the blocksize increase proposals that
I thought would be likely to have widespread support. The
As a first impression, I think this proposal is intellectually interesting, but
crufty and hackish and should never actually be deployed. Writing code for
Bitcoin in a future in which we have deployed a few generalized softforks this
way sounds terrifying.
Instead of this:
On Dec 30, 2015, at 6:19 AM, Peter Todd wrote:
> Your fear is misplaced: it's trivial to avoid recursion with a bit of
That makes some sense. I downgrade my emotions from "a future in which we have
deployed a few generalized softforks this way sounds
On Dec 18, 2015, at 10:30 AM, Pieter Wuille via bitcoin-dev
> 1) The risk of an old full node wallet accepting a transaction that is
> invalid to the new rules.
> The receiver wallet chooses what address/script to accept coins on.
> written in code, but as a social contract. Breaking those rules would
> be considered a hardfork and is allowed only in exceptional situations.
> On 2015-12-29 07:42, Jonathan Toomim via bitcoin-dev wrote:
>> That sounds like a rather unlikely scenario. Unless you have a
It appears you're using the term "compression ratio" to mean "size reduction".
A compression ratio is the ratio (compressed / uncompressed). A 1 kB file
compressed with a 10% compression ratio would be 0.1 kB. It seems you're using
(1 - compressed/uncompressed), meaning that the compressed file
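To make the two definitions concrete (a plain illustration, not from the original mail):

```python
def compression_ratio(compressed, uncompressed):
    # Ratio as conventionally defined: compressed size / uncompressed size.
    return compressed / uncompressed

def size_reduction(compressed, uncompressed):
    # What the original poster appears to mean: fraction of bytes removed.
    return 1 - compressed / uncompressed

# A 1 kB file compressed down to 0.1 kB:
print(compression_ratio(100, 1000))  # 0.1 -> a "10% compression ratio"
print(size_reduction(100, 1000))     # 0.9 -> a "90% size reduction"
```

The same file is described by a 10% compression ratio or a 90% size reduction; conflating the two inverts the claim.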
On Feb 7, 2016, at 9:24 AM, jl2...@xbt.hk wrote:
> You are making a very naïve assumption that miners are just looking for
> profit for the next second. Instead, they would try to optimize their short
> term and long term ROI. It is also well known that some miners would mine at
> a loss, even
On Feb 6, 2016, at 9:21 PM, Jannes Faber via bitcoin-dev
> They *must* be able to send their customers both coins as separate
Supporting the obsolete chain is unnecessary. Such support has not been offered
in any cryptocurrency
On Feb 7, 2016, at 7:19 AM, Anthony Towns via bitcoin-dev
> The stated reasoning for 75% versus 95% is "because it gives "veto power"
> to a single big solo miner or mining pool". But if a 20% miner wants to
> "veto" the upgrade, with a 75%
> On Feb 25, 2016, at 9:56 PM, Gregory Maxwell wrote:
> The batching was
> temporarily somewhat hobbled between 0.10 and 0.12 (especially when
> you had any abusive frequently pinging peers attached), but is now
> fully functional again and it now manages to batch many
The INV scheme used by Bitcoin is not very efficient at all. Once you take into
account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV
takes 193 bytes, according to wireshark. That's 127 bytes for the INV message
and 66 bytes for the ACK. All of this is for 32 bytes of
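The overhead arithmetic works out as follows, using the byte counts quoted above:

```python
# Byte counts from the wireshark measurement quoted above.
inv_message = 127  # INV message including Bitcoin, TCP, IP, ethernet overhead
tcp_ack = 66       # the TCP ACK coming back
payload = 32       # the tx hash actually being communicated

total = inv_message + tcp_ack
print(total)            # 193 bytes on the wire per announced transaction
print(payload / total)  # ~0.166: only about 17% of the traffic is payload
```

In other words, roughly 5/6 of INV traffic is framing and acknowledgement rather than transaction hashes.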
Well, here's another idea: we could shorten the tx hashes to about 4 to 6 bytes
instead of 32.
Let's say we have a 1 GB mempool with 2M transactions in it. A 4 byte shorthash
would have a 0.046% chance of resulting in a collision with another transaction
in our mempool, assuming a random
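That collision figure can be checked directly. A sketch, assuming hashes are uniformly distributed:

```python
# Probability that a random 4-byte shorthash matches at least one of the
# ~2M transactions already in the mempool, assuming uniform random hashes.
mempool_txs = 2_000_000
space = 2 ** 32  # size of the 4-byte shorthash space

p_collision = 1 - (1 - 1 / space) ** mempool_txs
print(f"{p_collision:.3%}")
```

This prints a value close to the 0.046% figure quoted above; for small probabilities it is well approximated by mempool_txs / space.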
Just checking to see if I understand this optimization correctly. In order to
find merkle roots in which the rightmost 32 bits are identical (i.e. partial
hash collisions), we want to compute as many merkle root hashes as quickly as
possible. The fastest way to do this is to take the top level
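The collision-finding step can be illustrated with a toy birthday search on the low 32 bits. This is a sketch only: it uses a varying 8-byte counter as a stand-in for regenerated merkle roots, not the actual top-level optimization described (which is truncated above):

```python
import hashlib

def dsha256(b):
    # Bitcoin-style double SHA256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Birthday search: generate candidate "roots" and look for two whose last
# 4 bytes (32 bits) match. Expect a hit after roughly 2**16 candidates.
seen = {}
collision = None
i = 0
while collision is None:
    root = dsha256(i.to_bytes(8, "little"))  # stand-in for a merkle root
    tail = root[-4:]
    if tail in seen:
        collision = (seen[tail], i)
    else:
        seen[tail] = i
    i += 1
print(collision)
```

The point of the optimization is making each candidate root cheap to produce, since the birthday search itself is just bookkeeping.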
Ethically, this situation has some similarities to the DAO fork. We have an
entity who closely examined the code, found an unintended characteristic of
that code, and made use of that characteristic in order to gain tens of
millions of dollars. Now that developers are aware of it, they want to