On Tue, Jun 26, 2012 at 10:47:56AM -0600, Stefan Priebe wrote:
> On 25.06.2012 22:23, Josef Bacik wrote:
> > On Mon, Jun 25, 2012 at 02:20:31PM -0600, Stefan Priebe wrote:
> >> On 25.06.2012 22:11, Josef Bacik wrote:
> >>> On Mon, Jun 25, 2012 at 01:33:09PM -0600, Stefan Priebe wrote:
> >>>> It's the same with v3.4. I can't go back any further, as that very
> >>>> quickly results in corruption. Any ideas how to debug this?
> >>>>
> >>>
> >>> What workload are you running? I have an SSD here with discard support
> >>> I can try to reproduce on. Thanks,
> >>
> >> I'm using fio with 50 jobs doing 4k random writes against ceph, but I
> >> don't know exactly which load ceph then generates. ;-(
> >>
> >
> > That's fine, I have a handy "create a local ceph cluster" script from an
> > earlier problem; just send me your fio job and I'll run it locally. Thanks,
>
> Were you able to find anything? Can I do more or different testing?
>
I can't reproduce it, so I'm going to have to figure out a way to debug this through you. As soon as I think of something I will let you know. Thanks,

Josef
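For reference, a minimal fio job along the lines Stefan describes (50 jobs, 4k random writes) might look like the sketch below. The actual job file was never posted in this thread, so the ioengine, file size, runtime, and target directory are illustrative assumptions only:

    ; sketch of a job resembling the described workload;
    ; ioengine, size, runtime and directory are assumptions, not from the thread
    [global]
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    numjobs=50
    size=1g
    runtime=300
    time_based
    directory=/mnt/btrfs
    group_reporting

    [randwrite]

Running something like this with "fio randwrite.fio" against a btrfs filesystem mounted with discard on an SSD should roughly approximate the direct fio load, although the I/O pattern ceph itself generates on top of it is unknown, as Stefan notes.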