> On 18 Feb 2016, at 4:20 PM, Valery Smyslov <[email protected]> wrote:
>
>>> I tend to support this idea, but I think in this case the sub-puzzles must
>>> be chained to deal with parallel solving.
>>> Something like the following:
>>>
>>> Puzzle_data[0] = cookie
>>> Puzzle_data[1] = Puzzle_solution[0] | cookie
>>> Puzzle_data[2] = Puzzle_solution[1] | cookie
>>> ...
>>> Puzzle_data[n] = Puzzle_solution[n-1] | cookie
>>>
>>> Or probably someone could suggest a more clever construction?
>>
>> I'm not really against this, but is parallel solving really an issue?
>> It does give an advantage to an 8-core desktop or 16-core server over a
>> 2-core laptop.
>> OTOH it might mitigate the power disparity between smartphones and laptops,
>> because phones have 4, 6, or 8 cores, while most laptops tend to have 2.
>
> The primary goal of sub-puzzling (as I see it) is not to make the puzzle equally
> hard for all clients, but to make puzzle hardness more predictable.
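The chained construction above can be sketched in a few lines. This is only a toy illustration: it assumes a hypothetical hash-preimage sub-puzzle (find a suffix giving SHA-256 output with a given number of leading zero bits), since the actual puzzle format isn't specified in this thread. The point it shows is that each sub-puzzle's data depends on the previous solution, so the steps themselves cannot be solved in parallel, even though the search within one step still can be.

```python
import hashlib
import itertools

def solve(data: bytes, bits: int) -> bytes:
    """Brute-force a suffix s so that SHA-256(data || s) has `bits`
    leading zero bits. The key space scanned by this loop is what
    could still be partitioned across cores within one step."""
    for i in itertools.count():
        s = i.to_bytes(8, "big")
        h = int.from_bytes(hashlib.sha256(data + s).digest(), "big")
        if h >> (256 - bits) == 0:
            return s

def solve_chain(cookie: bytes, n: int, bits: int) -> list:
    """Solve n chained sub-puzzles. Each step's input depends on the
    previous step's solution, forcing sequential progress through steps."""
    solutions = []
    data = cookie                   # Puzzle_data[0] = cookie
    for _ in range(n):
        s = solve(data, bits)
        solutions.append(s)
        data = s + cookie           # Puzzle_data[i] = Puzzle_solution[i-1] | cookie
    return solutions

sols = solve_chain(b"example-cookie", n=4, bits=8)
```

With `bits = 8`, each sub-puzzle takes about 256 hash attempts on average, so total work concentrates around n * 256 attempts; that averaging is what makes the chained variant's hardness more predictable than a single puzzle of equivalent expected cost.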
Sure, but I'd rather not make the client disparity problem more severe.

> As far as I understand, you got the figures in your previous message
> by sequential solution of a given number of puzzles, didn't you?
> Did you experiment with parallel puzzle solving on multicore CPUs?
> Probably the effect of sub-puzzling in that case would be a bit different.
> It would be interesting to compare the results.

My tests are single-CPU, mostly out of convenience. I iterated over possible
keys until 1, 4, or 16 solutions were found. This should be very easy to
parallelize, and the times would roughly halve if I had used both cores.

With a chain such as you're suggesting, I could still parallelize within each
step: partition the key space and search in parallel until one solution is
found, then proceed to the next step.

Yoav

_______________________________________________
IPsec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ipsec
