Just as an aside to this lengthy convo, the Cryptonote-based BCN recently
had some interesting updates which made it easier for ordinary computers
(nothing special) to handle it.
I realize that's not Bitcoin, but I thought I'd throw it out there.
> Thanks Mike.
>
> Indeed, I am aware of the current approach, which is why I was suggesting…
Thanks Mike.
Indeed, I am aware of the current approach, which is why I was suggesting
this as an alternative.
I haven't thought about it enough, and perhaps it was too radical a
rethinking - just wanted to see what the smarter minds thought.
Thanks again.
-Randi
On 7/5/14, 4:43 AM, Mike Hearn wrote:
>
> Is it possible instead to allocate a portion of the reward to " a # of
> runner up(s)" even though the runner-up(s) block will be orphaned?
>
There's really no concept of a "runner up" because hashing is
progress-free. It's unintuitive and often trips people up. There's no
concept that everyone…
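Progress-freeness is easy to demonstrate: each double-SHA256 attempt is an independent Bernoulli trial, so past failed attempts buy a miner nothing toward the next one. A toy sketch in Python (the target here is made up so the demo finishes quickly; it is not Bitcoin's real difficulty encoding):

```python
import hashlib
import os

# Toy target: roughly a 1-in-65536 chance per attempt (NOT Bitcoin's
# real difficulty encoding; chosen so the demo finishes quickly).
TARGET = 1 << 240

def attempt(header: bytes, nonce: int) -> bool:
    """One independent trial: double-SHA256 of header plus nonce."""
    h = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    h = hashlib.sha256(h).digest()
    return int.from_bytes(h, "big") < TARGET

# The trials are memoryless: the chance that attempt N+1 succeeds is
# the same whether attempts 1..N failed or were never made, so there
# is no accumulated "progress" and hence no runner-up to reward.
header = os.urandom(80)
winning_nonce = next(n for n in range(2_000_000) if attempt(header, n))
```

There is exactly one winner per block, and a second-place miner's work is indistinguishable from no work at all.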
Hi All,
This is a bit tangential to the conversation, but since the genesis of
this conversation is Mike's decentralization blog post, I decided to
post here.
Perhaps the solution to the mining problem lies in the reward structure
rather than in the proof of work/asics.
Is it possible instead to allocate a portion of the reward to "a # of
runner up(s)" even though the runner-up(s) block will be orphaned?
On Friday, July 04, 2014 8:21:42 PM Jorge Timón wrote:
> On 7/4/14, kjj wrote:
> > I suspect that there exist no algorithms which cannot be done better in
> > an application-specific device than in a general purpose computer. And
> > if there is such a thing, then it must necessarily perform best on one
> > specific platform, making that platform the de facto…
On 7/4/14, kjj wrote:
> I suspect that there exist no algorithms which cannot be done better in
> an application-specific device than in a general purpose computer. And
> if there is such a thing, then it must necessarily perform best on one
> specific platform, making that platform the de facto…
Agreed. If the PoW is most efficient on general-purpose CPUs, that
means Intel, AMD and maybe IBM would be the only entities capable of
producing competitive mining equipment.
Aaron
Aaron Voisine
breadwallet.com
On Fri, Jul 4, 2014 at 11:39 AM, Ron Elliott wrote:
> I feel everyone should re-read that last paragraph as it carries the most…
I feel everyone should re-read that last paragraph as it carries the most
weight IMO.
On Fri, Jul 4, 2014 at 9:50 AM, kjj wrote:
> Just some general comments on this topic/discussion.
>
> I suspect that there exist no algorithms which cannot be done better in
> an application-specific device than in a general purpose computer…
Just some general comments on this topic/discussion.
I suspect that there exist no algorithms which cannot be done better in
an application-specific device than in a general purpose computer. And
if there is such a thing, then it must necessarily perform best on one
specific platform, making that platform the de facto…
Yup, no need to apologise. If nothing else the conversations get archived
where other people can use them to get up to speed faster. A lot of these
discussions get spread across forums, lists and IRC so it can be hard to
know what the current state of the art thinking is.
Recall the second prong of…
On Friday 04 July 2014 04:37:26 Gregory Maxwell wrote:
[excellent explanation removed for brevity]
> > Apologies in advance if this is a stupid idea.
>
> No need to be sorry— talking about these things is how people learn.
> While I don't think this idea is good, and I'm even skeptical about…
On Fri, Jul 4, 2014 at 3:27 AM, Andy Parkins wrote:
> Hello,
>
> I had a thought after reading Mike Hearn's blog about it being impossible to
> have an ASIC-proof proof of work algorithm.
>
> Perhaps I'm being dim, but I thought I'd mention my thought anyway.
Thanks for sharing. Ideas similar to…
On Friday 04 July 2014 07:22:19 Alan Reiner wrote:
> I think you misunderstood. Using a ROMix-like algorithm, each hash
I did. Sorry.
> requires a different 32 MB of the blockchain. Uniformly distributed
> throughout the blockchain, and no way to predict which 32 MB until you
> have actually…
On Fri, Jul 04, 2014 at 06:53:47AM -0400, Alan Reiner wrote:
> Something similar could be applied to your idea. We use the hash of a
> prevBlockHash||nonce as the starting point for 1,000,000 lookup
> operations. The output of the previous lookup is used to determine
> which block and tx (perhaps…
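A sketch of the scheme as I read it, with a small in-memory list of "blocks" standing in for the real chain; the function name, the toy data, and the lookup count are all illustrative, not part of the proposal:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain_lookup_pow(prev_block_hash: bytes, nonce: int,
                     blocks: list, n_lookups: int = 1000) -> bytes:
    """Sketch: hash(prevBlockHash || nonce) seeds a walk where each
    step's output picks the next (block, tx) to read."""
    x = H(prev_block_hash + nonce.to_bytes(8, "little"))
    for _ in range(n_lookups):
        block_idx = int.from_bytes(x[:16], "big") % len(blocks)
        txs = blocks[block_idx]
        tx_idx = int.from_bytes(x[16:], "big") % len(txs)
        # Fold the chosen transaction into the running state; the next
        # indices are unpredictable until this hash is computed, which
        # is what forces the chain data to be available.
        x = H(x + txs[tx_idx])
    return x

# Toy chain: 64 blocks of 10 fake "transactions" each.
toy_chain = [[bytes([b, t]) * 16 for t in range(10)] for b in range(64)]
digest = chain_lookup_pow(b"\x00" * 32, 12345, toy_chain)
```

The point of the sequential dependence is that the lookups can't be batched or prefetched: each one must finish before the next address is even known.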
On 07/04/2014 07:15 AM, Andy Parkins wrote:
> On Friday 04 July 2014 06:53:47 Alan Reiner wrote:
>
>> ROMix works by taking N sequential hashes and storing the results into a
>> single N*32 byte lookup table. So if N is 1,000,000, you are going to
>> compute 1,000,000 and store the results into 32,000,000 sequential bytes…
On Friday 04 July 2014 06:53:47 Alan Reiner wrote:
> ROMix works by taking N sequential hashes and storing the results into a
> single N*32 byte lookup table. So if N is 1,000,000, you are going to
> compute 1,000,000 and store the results into 32,000,000 sequential bytes
> of RAM. Then you are…
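The description maps closely onto scrypt's ROMix. A simplified Python sketch of the shape of the construction (SHA-256 stands in for scrypt's BlockMix, and concatenation stands in for XOR, so this is an illustration rather than the real function):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def romix_sketch(seed: bytes, n: int) -> bytes:
    # Phase 1: n sequential hashes, all kept resident (n * 32 bytes of RAM).
    table = []
    x = seed
    for _ in range(n):
        table.append(x)
        x = H(x)
    # Phase 2: n data-dependent lookups. Each index depends on the
    # running state, so you can't know which entries you will need
    # until you get there; discarding the table forces recomputation.
    for _ in range(n):
        j = int.from_bytes(x, "big") % n
        x = H(x + table[j])
    return x
```

With n = 1,000,000 the table is the 32,000,000 bytes of RAM described above, and the phase-2 lookups are what make trading memory for computation expensive.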
Just a thought on this -- I'm not saying this is a good idea or a bad
idea, because I have spent about zero time thinking about it, but
something did come to mind as I read this. Reading 20 GB of data for
every hash might be a bit excessive. And as the blockchain grows, it
will become infeasible…
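The infeasibility is easy to put rough numbers on (my figures, not from the thread): even if the whole chain streamed from RAM at full speed, memory bandwidth caps the hash rate long before the hash function does.

```python
# Assumed figures: a ~20 GB chain (as in the message) and ~25 GB/s of
# DRAM bandwidth, roughly a 2014 desktop. Both are illustrative.
chain_bytes = 20e9
dram_bandwidth = 25e9  # bytes per second

# If every hash must read the whole chain, bandwidth bounds the rate:
max_hashes_per_second = dram_bandwidth / chain_bytes
print(max_hashes_per_second)  # ~1.25 H/s, shrinking as the chain grows
```

And that is the best case; a chain too large for RAM would fall back to disk speeds, orders of magnitude slower still.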