Re: [Bitcoin-development] Scaling at the end user level

2012-02-08 Thread grarpamp
>> Have any groups published proposals for distributing
>> a weekly precomputed bootstrap chain?
>> rsync? db_dump > git > db_load?
>> There is also 50% or more compression available in the index
>> and chain.

> I have proposed packaging part of the block chain (doesn't even have to be
> weekly, just until the last checkpoint), but people fear it runs contrary to
> the distributed approach of Bitcoin.

Git repos are backed by strong hashes. Each commit could be a single
block dump, perhaps into a file hierarchy. Trusted entities, pools, etc.
could sign at a checkpoint/height. Blockchain tools would need to be
made that can take the blk* files, export single blocks, and
process/export up to a certain block and quit. Everyone would do a
comparison and sign a commit hash; everyone else git pulls. Having
the block toolset is a key prerequisite to any sort of distribution.
Those tools don't exist now :( Maybe the two bitcoin-compatible library
projects out there will implement them :) Torrents are also strongly
hashed and could be signed as well.

Making the blockchain tools would be the most important thing to start.
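
Something like this is all the git side would need once an export tool
exists. A sketch only, in Python; the one-file-per-block repo layout
and the export step are assumptions, none of it exists today:

import subprocess

# Sketch: one exported block per commit, so the repo history mirrors
# the chain. Assumes blocks were already exported as blocks/<height>.dat
# by the (not yet existing) export tool.
def commit_block(height, path):
    subprocess.check_call(["git", "add", path])
    subprocess.check_call(["git", "commit", "-m", "block %d" % height])

# Trusted parties each attach a GPG-signed tag at a checkpoint height;
# everyone compares the tagged commit hash before pulling.
def sign_checkpoint(height):
    subprocess.check_call(["git", "tag", "-s", "checkpoint-%d" % height,
                           "-m", "chain state at height %d" % height])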

> BTW: On such an old computer you should probably use one of the thin
> clients.

If that means not validating the chain, then it's as above. I'm not
sure it's right to not care about the history when only making new
transactions with a new key created post-install, and then only
validating new transactions as they come in. Will have a look.



Re: [Bitcoin-development] Scaling at the end user level

2012-02-07 Thread grarpamp
> I never did track down this exact issue but it's an artificial
> slowdown.. meaning compression and whatever else wouldn't help much.

I meant for anyone who wanted to distribute the dataset as a project.

> It has something to do with the database file locking and flushing..
> on some systems I've seen the block chain get fully done in 10-20
> mins and on others it slows down to the point where it will never
> catch up.. but not in a way that's related to the age of the computer
> or anything. You might want to experiment if you want to track this
> down.. try building your own libs

Rather than use dated/modified packages, I compiled current versions
of all component sources manually.

> and compare different operating
> systems, on the same hardware to get a more 'true' comparison maybe.

True. Used them all before; happy with BSD for now. Just knowing
what the general setup is on those zippy systems should suffice,
i.e. blindly fishing for such a zippy system to compare against
through various install tests doesn't sound too appealing. It's
different from benchmarking.

Datapoint: The system below is not zippy.

> I think everyone is vaguely aware of the problem but it has not
> been tracked down and eliminated. I don't think the problem is
> within bitcoin itself but in how truthfully the database file is
> actually written to disk.

Am I correct in guessing that, given a certain height, the data
in blkindex and blk0001 should be the same across instances?

# file blk*
blk0001.dat:  data
blkindex.dat: Berkeley DB (Btree, version 9, native byte-order)

For comparison purposes, what is the format of blk0001.dat?
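
My guess is it's just the network-serialized blocks concatenated, each
record prefixed with the 4 network magic bytes and a little-endian
length. If that's right, a comparison scanner is trivial. A sketch
under that assumption; note the records are in the order this node
accepted them, so the file need not be byte-identical across instances
even at the same height:

import hashlib
import struct
import sys

MAGIC = b"\xf9\xbe\xb4\xd9"  # main network magic (assumed layout)

def scan(path, stop=None):
    # Walk [magic][LE length][block] records and print each block hash
    # (double-SHA256 of the 80-byte header, reversed) for diffing.
    with open(path, "rb") as f:
        n = 0
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            magic, length = head[:4], struct.unpack("<I", head[4:])[0]
            if magic != MAGIC:
                sys.exit("bad magic at record %d" % n)
            block = f.read(length)
            h = hashlib.sha256(hashlib.sha256(block[:80]).digest()).digest()
            print("%6d %s" % (n, h[::-1].hex()))
            n += 1
            if stop is not None and n >= stop:
                return

if __name__ == "__main__":
    scan(sys.argv[1])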

> If it really gets flushed to disk every
> block like bitcoin wants it to be, then there is no way that you
> could get more than 50-60 blocks per second through it (due to
> rotational latency), but on some operating systems and versions/options
> it seems to end up caching the writes and flies through it at
> thousands of blocks per second. The problem is similar to what's
> mentioned here: http://www.sqlite.org/faq.html#q19

I'm not running Linux with asynchronous data and metadata turned on
by default, if that's what you mean :) ZFS, disk crypto, standard
drive write cache. Looking at it, I'm largely buried in that crypto
at 8 MB/sec or so.
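
For the record, an easy way to see which regime a given fs/disk setup
is in; a throwaway benchmark, nothing bitcoin-specific:

import os
import time

def fsync_rate(path="fsync-test.dat", seconds=5):
    # Count honest synchronous 4 KiB writes per second. On rotating
    # storage that really flushes, this lands in the tens per second,
    # consistent with the 50-60 blocks/sec figure above; thousands
    # means something (drive cache, fs, OS) is absorbing the flushes.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    count = 0
    deadline = time.time() + seconds
    try:
        while time.time() < deadline:
            os.write(fd, b"\0" * 4096)
            os.fsync(fd)
            count += 1
    finally:
        os.close(fd)
        os.unlink(path)
    return count / float(seconds)

if __name__ == "__main__":
    print("%.0f fsyncs/sec" % fsync_rate())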

> Perhaps it's as simple as some default in the db lib.. and it seems
> to default to different things on different versions/operating
> systems/filesystems.

Hmm, I compiled everything with the defaults. Will go back and
look at the bdb options; I don't think there was anything interesting
there. I'd bet a lot is tied to the fs and cpu. A single-core
P4 @ 1.8 GHz (512k/2g) isn't much up against ZFS + disk crypto.
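
If the flushing is the culprit, one cheap experiment, assuming this
vintage of Berkeley DB reads a DB_CONFIG file from the environment
home (the datadir here), would be to relax commit syncing and see if
the slowdown disappears. Not something to leave on, since it trades
durability for speed; the file is one line named DB_CONFIG in the
datadir:

set_flags DB_TXN_NOSYNC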

It seems to take its time and roll up all but the last database log
file (of a hundred or more) on receiving SIGTERM. Is it supposed to
roll and delete the last log too? Can I safely delete everything but
the blk* files? (wallet excepted, of course :)
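
If not, I'd guess the supported way to trim the logs is Berkeley DB's
own db_archive utility rather than rm; untested here, and the
environment home path is a placeholder:

import subprocess

# db_archive -d removes log files no longer needed for recovery;
# -h points it at the environment home (placeholder path below).
subprocess.check_call(["db_archive", "-d", "-h", "/path/to/datadir"])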

Currently, in KiB...

running:
853716  database
747881  blk0001.dat
290601  blkindex.dat
4361    addr.dat
137 __db.005
137 __db.004
137 __db.003
137 __db.002
41  __db.006
25  __db.001

sigterm:
750569  blk0001.dat
291497  blkindex.dat
8465    database/log.000nnn
4361addr.dat

database/log.000133: Berkeley DB (Log, version 16, native byte-order)
