On 2014-04-30 21:10, Jens Axboe wrote:
On 2014-04-30 17:18, Lakshmi wrote:
Hi Jens,
   I am trying to run the below job,
============================
[global]
buffered=0
ioengine=libaio
iodepth=8

[job1]
read_iolog=/home/TRACE_REPLAY_2/file_4391136.bin
=============================================
and I get the error,

job1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
fio-2.1.7
Starting 1 process
fio: io_u error on file /dev/sda: Invalid argument
      write offset=38400921600, buflen=4160
fio: io_u error on file /dev/sda: Invalid argument
      write offset=38400921600, buflen=4160
fio: pid=30366, err=22/file:io_u.c:1550, func=io_u error,
error=Invalid argument

job1: (groupid=0, jobs=1): err=22 (file:io_u.c:1550, func=io_u error,
error=Invalid argument): pid=30366: Wed Apr 30 18:01:06 2014
   cpu          : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=6
   IO depths    : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%,
32=0.0%, >=64=0.0%

======================================
Any idea what the reason for this is? Please let me know if I am
missing something here.
Attached is the binary file that I am replaying.

The trace looks corrupt. The value is supposed to be 4096, or 1048576 in
the native format, but natively it is 1074790400, which is a 0x40 stuck
in front of the 0x100000. I'll take a look at the trace in detail
tomorrow to find out what's up.
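
Side by side in hex, the stray byte stands out:

  1048576    == 0x00100000   (the expected native value)
  1074790400 == 0x40100000   (what the trace actually contains)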

So I dumped the trace from the offending location:

magic           65617407
sequence        0
time            0
sector          75001800
bytes           4160
action          1a0001
pid             4391136
device          800000
cpu             0
error           0
pdu_len         0

and there's not much fio can do to replay that, since direct=1
(buffered=0) is set. But the trace itself looks suspect, since blktrace
works in sectors internally, so the byte count should never be anything
that isn't a multiple of the device's sector size.

pid looks suspect, too. That's 0x4300e0.
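
If you want to scan the rest of the trace for entries like that, a rough
sketch along these lines would do it. It assumes the records match
struct blk_io_trace from blktrace_api.h, that the trace is native
endian, and a 512-byte sector size, so treat it as a starting point
rather than anything definitive:

#include <stdio.h>
#include <stdint.h>

/* One blktrace record, layout as in include/uapi/linux/blktrace_api.h */
struct blk_io_trace {
	uint32_t magic;     /* BLK_IO_TRACE_MAGIC << 8 | version */
	uint32_t sequence;
	uint64_t time;
	uint64_t sector;
	uint32_t bytes;     /* transfer length in bytes */
	uint32_t action;
	uint32_t pid;
	uint32_t device;
	uint32_t cpu;
	uint16_t error;
	uint16_t pdu_len;   /* payload bytes following this record */
};

#define BLK_IO_TRACE_MAGIC 0x65617400

int main(int argc, char *argv[])
{
	struct blk_io_trace t;
	FILE *f;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <trace.bin>\n", argv[0]);
		return 1;
	}
	f = fopen(argv[1], "rb");
	if (!f) {
		perror("fopen");
		return 1;
	}

	while (fread(&t, sizeof(t), 1, f) == 1) {
		if ((t.magic & 0xffffff00) != BLK_IO_TRACE_MAGIC) {
			fprintf(stderr, "bad magic 0x%x, stopping\n",
				(unsigned) t.magic);
			break;
		}
		/* blktrace works in sectors, so a byte count that isn't
		 * 512-aligned points at a corrupt record */
		if (t.bytes % 512)
			printf("suspect record: sector %llu bytes %u pid %u\n",
			       (unsigned long long) t.sector,
			       (unsigned) t.bytes, (unsigned) t.pid);
		/* skip any payload attached to this record */
		if (t.pdu_len)
			fseek(f, t.pdu_len, SEEK_CUR);
	}

	fclose(f);
	return 0;
}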

--
Jens Axboe

