Re: Replicable file-system corruption due to fsck/ufs

2019-04-10 Thread Warner Losh
On Wed, Apr 10, 2019 at 10:46 PM  wrote:

> Peter Holm  wrote:
>
> > I see this even with a single truncate on HEAD.
> >
> > $ ./truncate10.sh
> > 96 -rw-r--r--  1 root  wheel  1073741824 11 apr. 06:33 test
> > ** /dev/md10a
> > ** Last Mounted on /mnt
> > ** Phase 1 - Check Blocks and Sizes
> > INODE 3: FILE SIZE 1073741824 BEYOND END OF ALLOCATED FILE, SIZE SHOULD
> BE 268435456
> > ADJUST? yes
>
> Thanks.. I should have tested that myself.. doh! I was trying to more
> closely replicate my real file that triggered the problem, which contained
> a number of sparse areas.
>
> And thanks for adding Kirk to the discussion. I wanted to first be sure it
> wasn't
> just me :-)
>

I believe this check was added recently to detect the corruption that can
happen when the system crashes while a file is being appended.
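
For reference, a minimal single-truncate reproducer along the lines Peter
describes might look like the sketch below (this is not the actual
truncate10.sh; the md unit, image size, and mount point are assumptions):

  #!/bin/sh
  # Sketch: build a scratch UFS file system, append a little data, then
  # extend the file with truncate(1) so it ends in a sparse hole.
  truncate -s 2g /tmp/md10.img
  mdconfig -a -t vnode -f /tmp/md10.img -u 10
  newfs /dev/md10 > /dev/null
  mount /dev/md10 /mnt
  echo "testing 1...2...3..." >> /mnt/test
  truncate -s +1g /mnt/test        # file now ends in a 1 GB hole
  ls -ls /mnt/test
  umount /mnt
  fsck_ffs -y /dev/md10            # first pass "adjusts" the file size
  fsck_ffs -y /dev/md10            # run twice; a clean second pass is the expected result
  mdconfig -d -u 10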

Warner
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Replicable file-system corruption due to fsck/ufs

2019-04-10 Thread jamie
Peter Holm  wrote:

> I see this even with a single truncate on HEAD.
>
> $ ./truncate10.sh
> 96 -rw-r--r--  1 root  wheel  1073741824 11 apr. 06:33 test
> ** /dev/md10a
> ** Last Mounted on /mnt
> ** Phase 1 - Check Blocks and Sizes
> INODE 3: FILE SIZE 1073741824 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
> 268435456
> ADJUST? yes

Thanks.. I should have tested that myself.. doh! I was trying to more closely
replicate my real file that triggered the problem, which contained a number
of sparse areas.

And thanks for adding Kirk to the discussion. I wanted to first be sure it 
wasn't
just me :-)

Cheers, Jamie
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Replicable file-system corruption due to fsck/ufs

2019-04-10 Thread Peter Holm
On Thu, Apr 11, 2019 at 04:47:43AM +0100, Jamie Landeg-Jones wrote:
> I've noticed replicable disk corruption by fsck_ufs/ffs on sparse files.
> 
> This is on amd64 12-stable-20190409, but I first noticed it on
> 12-stable-20190326.
> 
> I didn't notice it on my previous build of 12-stable-20190107, but I
> may not have had any relevant sparse files at the time, so I don't know
> if that version was affected. 12-release worked OK.
> 
> Here is a simplified replicable example. Thinking about it just now, I
> suspect it's triggered by files which end in sparseness.
> 
> Can anyone else replicate this, or has my machine gone nuts?
> 
> Cheers, Jamie
> 
>  | root@thompson# l
>  | total 12
>  | 4 drwxr-x---   2 root  wheel  -   512 11 Apr 04:08 ./
>  | 4 drwxr-xr-x  16 root  wheel  - 1,024 11 Apr 04:08 ../
>  | 4 -rw-r-   1 root  wheel  -43 11 Apr 04:08 typescript
>  |
>  | root@thompson# dd if=/dev/zero bs=1m count=2048 of=test.img
>  | 2048+0 records in
>  | 2048+0 records out
>  | 2147483648 bytes transferred in 4.127411 secs (520298036 bytes/sec)
>  |
>  | root@thompson# l
>  | total 2097708
>  |   4 drwxr-x---   2 root  wheel  -   512 11 Apr 04:08 ./
>  |   4 drwxr-xr-x  16 root  wheel  - 1,024 11 Apr 04:08 ../
>  | 2097696 -rw-r-   1 root  wheel  - 2,147,483,648 11 Apr 04:08 test.img
>  |   4 -rw-r-   1 root  wheel  -43 11 Apr 04:08 typescript
>  |
>  | root@thompson# mdconfig test.img
>  | md1
>  |
>  | root@thompson# newfs /dev/md1
>  | /dev/md1: 2048.0MB (4194304 sectors) block size 32768, fragment size 4096
>  | using 4 cylinder groups of 512.03MB, 16385 blks, 65664 inodes.
>  | super-block backups (for fsck_ffs -b #) at:
>  |  192, 1048832, 2097472, 3146112
>  |
>  | root@thompson# md mnt
>  | mnt
>  |
>  | root@thompson# mount /dev/md1 mnt
>  |
>  | root@thompson# cd mnt/
>  | ~/x/mnt ~/x
>  |
>  | root@thompson# df .
>  | Filesystem 1K-blocks Used Avail Capacity  Mounted on
>  | /dev/md1   2,031,132  8  1,868,636   0%   /root/x/mnt
>  |
>  | root@thompson# l
>  | total 12
>  | 4 drwxr-xr-x  3 root  wheel - 512 11 Apr 04:09 ./
>  | 4 drwxr-x---  3 root  wheel - 512 11 Apr 04:09 ../
>  | 4 drwxrwxr-x  2 root  operator  - 512 11 Apr 04:09 .snap/
>  |
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
>  |
>  | root@thompson# l
>  | total 652
>  |   4 drwxr-xr-x  3 root  wheel -   512 11 Apr 04:14 ./
>  |   4 drwxr-x---  3 root  wheel -   512 11 Apr 04:09 ../
>  |   4 drwxrwxr-x  2 root  operator  -   512 11 Apr 04:09 .snap/
>  | 640 -rw-r-  1 root  wheel - 9,663,676,605 11 Apr 04:14 test
>  |
>  | root@thompson# sha256 -r test > sha256.out
>  |
>  | root@thompson# cd ..
>  | ~/x ~/x/mnt
>  |
>  | root@thompson# umount mnt
>  |
>  | root@thompson# fsck /dev/md1
>  | ** /dev/md1
>  | ** Last Mounted on /root/x/mnt
>  | ** Phase 1 - Check Blocks and Sizes
>  | INODE 4: FILE SIZE 9663676605 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
> 1342210048
>  | ADJUST? [yn] y
>  |
>  | ** Phase 2 - Check Pathnames
>  | ** Phase 3 - Check Connectivity
>  | ** Phase 4 - Check Reference Counts
>  | ** Phase 5 - Check Cyl groups
>  | 4 files, 163 used, 507620 free (20 frags, 63450 blocks, 0.0% fragmentation)
>  |
>  | * FILE SYSTEM IS CLEAN *
>  |
>  | * FILE SYSTEM WAS MODIFIED *
>  |
>  | root@thompson# fsck /dev/md1
>  | ** /dev/md1
>  | ** Last Mounted on /root/x/mnt
>  | ** Phase 1 - Check Blocks and Sizes
>  | PARTIALLY TRUNCATED INODE I=4
>  | SALVAGE? [yn] y
>  |
>  | INCORRECT BLOCK COUNT I=4 (1280 should be 256)
>  | CORRECT? [yn] y
>  |
>  | INODE 4: FILE SIZE 1342210048 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
> 268468224
>  | ADJUST? [yn] y
>  |
>  | ** Phase 2 - Check Pathnames
>  | ** Phase 3 - Check Connectivity
>  | ** Phase 4 - Check Reference Counts
>  | ** Phase 5 - Check Cyl groups
>  | FREE BLK COUNT(S) WRONG IN SUPERBLK
>  | SALVAGE? [yn] y
>  |
>  | SUMMARY INFORMATION BAD
>  | SALVAGE? [yn] y
>  |
>  | BLK(S) MISSING IN BIT MAPS
>  | SALVAGE? [yn] y
>  |
>  | 4 files, 35 used, 507748 free (20 frags, 63466 blocks, 0.0% fragmentation)
>  |
>  | * FILE SYSTEM IS CLEAN *
>  |
>  | * FILE SYSTEM WAS MODIFIED *
>  |
>  | root@thompson# fsck /dev/md1
>  | ** /dev/md1
>  | ** Last Mounted on 

Replicable file-system corruption due to fsck/ufs

2019-04-10 Thread Jamie Landeg-Jones
I've noticed replicable disk corruption by fsck_ufs/ffs on sparse files.

This is on amd64 12-stable-20190409, but I first noticed it on
12-stable-20190326.

I didn't notice it on my previous build of 12-stable-20190107, but I
may not have had any relevant sparse files at the time, so I don't know
if that version was affected. 12-release worked OK.

Here is a simplified replicable example. Thinking about it just now, I
suspect it's triggered by files which end in sparseness.

Can anyone else replicate this, or has my machine gone nuts?

Cheers, Jamie

 | root@thompson# l
 | total 12
 | 4 drwxr-x---   2 root  wheel  -   512 11 Apr 04:08 ./
 | 4 drwxr-xr-x  16 root  wheel  - 1,024 11 Apr 04:08 ../
 | 4 -rw-r-   1 root  wheel  -43 11 Apr 04:08 typescript
 |
 | root@thompson# dd if=/dev/zero bs=1m count=2048 of=test.img
 | 2048+0 records in
 | 2048+0 records out
 | 2147483648 bytes transferred in 4.127411 secs (520298036 bytes/sec)
 |
 | root@thompson# l
 | total 2097708
 |   4 drwxr-x---   2 root  wheel  -   512 11 Apr 04:08 ./
 |   4 drwxr-xr-x  16 root  wheel  - 1,024 11 Apr 04:08 ../
 | 2097696 -rw-r-   1 root  wheel  - 2,147,483,648 11 Apr 04:08 test.img
 |   4 -rw-r-   1 root  wheel  -43 11 Apr 04:08 typescript
 |
 | root@thompson# mdconfig test.img
 | md1
 |
 | root@thompson# newfs /dev/md1
 | /dev/md1: 2048.0MB (4194304 sectors) block size 32768, fragment size 4096
 | using 4 cylinder groups of 512.03MB, 16385 blks, 65664 inodes.
 | super-block backups (for fsck_ffs -b #) at:
 |  192, 1048832, 2097472, 3146112
 |
 | root@thompson# md mnt
 | mnt
 |
 | root@thompson# mount /dev/md1 mnt
 |
 | root@thompson# cd mnt/
 | ~/x/mnt ~/x
 |
 | root@thompson# df .
 | Filesystem 1K-blocks Used Avail Capacity  Mounted on
 | /dev/md1   2,031,132  8  1,868,636   0%   /root/x/mnt
 |
 | root@thompson# l
 | total 12
 | 4 drwxr-xr-x  3 root  wheel - 512 11 Apr 04:09 ./
 | 4 drwxr-x---  3 root  wheel - 512 11 Apr 04:09 ../
 | 4 drwxrwxr-x  2 root  operator  - 512 11 Apr 04:09 .snap/
 |
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 | root@thompson# echo "testing 1...2...3..." >> test ; truncate -s +1g test
 |
 | root@thompson# l
 | total 652
 |   4 drwxr-xr-x  3 root  wheel -   512 11 Apr 04:14 ./
 |   4 drwxr-x---  3 root  wheel -   512 11 Apr 04:09 ../
 |   4 drwxrwxr-x  2 root  operator  -   512 11 Apr 04:09 .snap/
 | 640 -rw-r-  1 root  wheel - 9,663,676,605 11 Apr 04:14 test
 |
 | root@thompson# sha256 -r test > sha256.out
 |
 | root@thompson# cd ..
 | ~/x ~/x/mnt
 |
 | root@thompson# umount mnt
 |
 | root@thompson# fsck /dev/md1
 | ** /dev/md1
 | ** Last Mounted on /root/x/mnt
 | ** Phase 1 - Check Blocks and Sizes
 | INODE 4: FILE SIZE 9663676605 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
1342210048
 | ADJUST? [yn] y
 |
 | ** Phase 2 - Check Pathnames
 | ** Phase 3 - Check Connectivity
 | ** Phase 4 - Check Reference Counts
 | ** Phase 5 - Check Cyl groups
 | 4 files, 163 used, 507620 free (20 frags, 63450 blocks, 0.0% fragmentation)
 |
 | * FILE SYSTEM IS CLEAN *
 |
 | * FILE SYSTEM WAS MODIFIED *
 |
 | root@thompson# fsck /dev/md1
 | ** /dev/md1
 | ** Last Mounted on /root/x/mnt
 | ** Phase 1 - Check Blocks and Sizes
 | PARTIALLY TRUNCATED INODE I=4
 | SALVAGE? [yn] y
 |
 | INCORRECT BLOCK COUNT I=4 (1280 should be 256)
 | CORRECT? [yn] y
 |
 | INODE 4: FILE SIZE 1342210048 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
268468224
 | ADJUST? [yn] y
 |
 | ** Phase 2 - Check Pathnames
 | ** Phase 3 - Check Connectivity
 | ** Phase 4 - Check Reference Counts
 | ** Phase 5 - Check Cyl groups
 | FREE BLK COUNT(S) WRONG IN SUPERBLK
 | SALVAGE? [yn] y
 |
 | SUMMARY INFORMATION BAD
 | SALVAGE? [yn] y
 |
 | BLK(S) MISSING IN BIT MAPS
 | SALVAGE? [yn] y
 |
 | 4 files, 35 used, 507748 free (20 frags, 63466 blocks, 0.0% fragmentation)
 |
 | * FILE SYSTEM IS CLEAN *
 |
 | * FILE SYSTEM WAS MODIFIED *
 |
 | root@thompson# fsck /dev/md1
 | ** /dev/md1
 | ** Last Mounted on /root/x/mnt
 | ** Phase 1 - Check Blocks and Sizes
 | PARTIALLY TRUNCATED INODE I=4
 | SALVAGE? [yn] y
 |
 | INCORRECT BLOCK COUNT I=4 (256 should be 128)
 | CORRECT? [yn] y
 |
 | INODE 4: FILE SIZE 268468224 BEYOND END OF ALLOCATED FILE, SIZE SHOULD BE 
134610944
 | ADJUST? [yn] y
 |
 | ** Phase 2 - Check Pathnames
 | ** Phase 3 - Check 

Re: Crontab Question

2019-04-10 Thread David Wolfskill
On Wed, Apr 10, 2019 at 04:34:49PM -0500, Software Info wrote:
> I see. I had, however, copied the output of env to the /etc/crontab PATH line.
> Wouldn’t that take care of an environment issue though?
> 
> 
> Regards
> SI
> 

The execution search path has no (direct) bearing on the current working
directory (and vice versa).

Your script cannot assume any particular current working directory.  If
you need a particular one, set it yourself (in the script itself).
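
For example, a cron-driven script can pin its working directory down right at
the top (a sketch; the directory is an assumption):

  #!/bin/sh
  # Sketch: don't rely on cron's working directory; set it explicitly.
  cd /home/me || exit 1
  mv ./*.csv listing.csv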

Peace,
david
-- 
David H. Wolfskill  da...@catwhisker.org
"The President is a coward." -- Kirsten Gillibrand

See http://www.catwhisker.org/~david/publickey.gpg for my public key.




Re: Crontab Question

2019-04-10 Thread Doug McIntyre
No. Your current working directory can't be captured in a PATH variable.

For cron jobs, assume nothing. Hard-code all path names. Assume the
only things in the PATH are /bin:/usr/bin; otherwise, give full path
names to the programs you want to run. Assume no environment variables
are set, and assume you are on the most basic setup possible (because you are).
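
A sketch of a script written in that defensive style (the paths are
assumptions, not the original script):

  #!/bin/sh
  # Sketch: set PATH yourself and use absolute paths everywhere.
  PATH=/bin:/usr/bin; export PATH
  cd /home/me || exit 1
  /bin/mv /home/me/upload/listing.csv /home/me/listing.csv
  /usr/bin/grep '@' /home/me/listing.csv > /home/me/addresses.txt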



On Wed, Apr 10, 2019 at 04:34:49PM -0500, Software Info wrote:
> I see. I had, however, copied the output of env to the /etc/crontab PATH line.
> Wouldn’t that take care of an environment issue though?
> 
> 
> Regards
> SI
> 
> Sent from Mail for Windows 10
> 
> From: Jonathan Chen
> Sent: Wednesday, April 10, 2019 4:23 PM
> To: Software Info
> Cc: freebsd-stable@freebsd.org
> Subject: Re: Crontab Question
> 
> On Thu, 11 Apr 2019 at 09:14, Software Info  wrote:
> >
> > OK. So although the script is located in my home directory, it doesn’t 
> > start there?
> 
> Correct. You cannot make any assumptions about the environment.
> -- 
> Jonathan Chen 
> 
> ___
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Crontab Question

2019-04-10 Thread Jonathan Chen
On Thu, 11 Apr 2019 at 09:34, Software Info  wrote:
>
> I see. I had, however, copied the output of env to the /etc/crontab PATH line.
> Wouldn’t that take care of an environment issue though?

When I say "environment", I mean it in the generic sense; including
working-directory.

However, best practise is to keep /etc/crontab as minimal as possible,
and to set up environ(7) within the invoked script-file.
Security-wise, you should keep the PATH in /etc/crontab at its standard
default, and invoke your script with:
* * * * * user /path/to/my/script

This is especially important when running as root.
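
Put differently, the crontab entry stays bare and the invoked script carries
its own environment -- a sketch (the paths are assumptions):

  #!/bin/sh
  # Sketch: the script sets up environ(7) for itself.
  PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
  export PATH
  cd /home/me || exit 1
  # ... actual work goes here ...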

Cheers.
-- 
Jonathan Chen 
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


RE: Crontab Question

2019-04-10 Thread Walter Cramer

On Wed, 10 Apr 2019, Software Info wrote:

OK. So although the script is located in my home directory, it doesn't
start there?  Sorry but I don't quite understand. Could you explain a
little further please?


Both 'cp' and 'ls' are located in /bin.  But if I run the 'ls' command in 
/root, 'ls' can't find 'cp' (unless I tell it where to look) - even though 
/bin *is* in my PATH -


server7:/root # ls cp
ls: cp: No such file or directory
server7:/root # ls /bin/cp
/bin/cp

Where the system looks for *commands* to execute is different from where
it looks for the other files that those commands use.  The latter is
generally only the current directory (unless you tell it otherwise).
When cron runs a script as root, the "current directory" will be /root.


BUT - for security and other reasons, it would be better to have cron run 
your script as you (not root), and as '/home/me/myscript' (instead of 
adding your home directory to PATH in /etc/crontab).
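
That would mean an entry in your own crontab (edited with "crontab -e" as the
unprivileged user) along these lines -- a sketch, with the path an assumption:

  # per-user crontab: no user field, full path to the script
  49 14 * * 1-5 /home/me/myscript.sh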


-Walter
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


RE: Crontab Question

2019-04-10 Thread Software Info
I see. I had, however, copied the output of env to the /etc/crontab PATH line.
Wouldn’t that take care of an environment issue though?


Regards
SI

Sent from Mail for Windows 10

From: Jonathan Chen
Sent: Wednesday, April 10, 2019 4:23 PM
To: Software Info
Cc: freebsd-stable@freebsd.org
Subject: Re: Crontab Question

On Thu, 11 Apr 2019 at 09:14, Software Info  wrote:
>
> OK. So although the script is located in my home directory, it doesn’t start 
> there?

Correct. You cannot make any assumptions about the environment.
-- 
Jonathan Chen 

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Crontab Question

2019-04-10 Thread Jonathan Chen
On Thu, 11 Apr 2019 at 09:14, Software Info  wrote:
>
> OK. So although the script is located in my home directory, it doesn’t start 
> there?

Correct. You cannot make any assumptions about the environment.
-- 
Jonathan Chen 
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


RE: Crontab Question

2019-04-10 Thread Software Info
OK. So although the script is located in my home directory, it doesn’t start 
there?  Sorry but I don’t quite understand. Could you explain a little further 
please?

Regards
SI

Sent from Mail for Windows 10

From: Jonathan Chen
Sent: Wednesday, April 10, 2019 3:50 PM
To: Software Info
Cc: freebsd-stable@freebsd.org
Subject: Re: Crontab Question

On Thu, 11 Apr 2019 at 08:18, Software Info  wrote:
>
> Hi All
> I am trying to schedule cron to run a script. The script is in my home 
> directory and so I added my home directory to the path file in /etc/crontab 
> below.
> PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin:/home:/home/me
>
> This is the crontab entry for the scheduled  task below.
> 49  14  *   *   1-5 root    myscript.sh
>
> myscript.sh grabs a file from another directory if it is there. If not, it 
> says “file not uploaded”. If the file is there, it copies it to my home 
> directory, strips email addresses out of it and uses mailx to send email to 
> those users. I keep getting the error that a number of files in my home 
> directory are missing but they are not.
>
> Please see errors below
> mv: rename *.csv to listing.csv: No such file or directory
> grep: listing.csv: No such file or directory
> /home/me/ipo-script.sh: cannot open body.txt: No such file or directory
> mv: rename /home/me/listing.txt to /home/me/listing.txt-10-04-19-1435: No 
> such file or directory
> mv: rename /home/me/listing.csv to /home/me/listing.csv-10-04-19-1435: No 
> such file or directory
> mv: rename /home/me/listing.txt-10-04-19-1435 to 
> /home/me/IPO-Backup-Files/listing.txt-10-04-19-1435: No such file or directory
> mv: rename /home/me/listing.csv-10-04-19-1435 to 
> /home/me/IPO-Backup-Files/listing.csv-10-04-19-1435: No such file or directory
>
> Because I added my home directory to the path in crontab, I am at a loss to 
> explain why this is still happening. Anyone have any ideas? Would really 
> appreciate some help.

You are assuming that the script starts in your home directory. It
doesn't, hence: "mv: rename *.csv to listing.csv: No such file or
directory"
-- 
Jonathan Chen 

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Crontab Question

2019-04-10 Thread Jonathan Chen
On Thu, 11 Apr 2019 at 08:18, Software Info  wrote:
>
> Hi All
> I am trying to schedule cron to run a script. The script is in my home 
> directory and so I added my home directory to the path file in /etc/crontab 
> below.
> PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin:/home:/home/me
>
> This is the crontab entry for the scheduled  task below.
> 49  14  *   *   1-5 root    myscript.sh
>
> myscript.sh grabs a file from another directory if it is there. If not, it 
> says “file not uploaded”. If the file is there, it copies it to my home 
> directory, strips email addresses out of it and uses mailx to send email to 
> those users. I keep getting the error that a number of files in my home 
> directory are missing but they are not.
>
> Please see errors below
> mv: rename *.csv to listing.csv: No such file or directory
> grep: listing.csv: No such file or directory
> /home/me/ipo-script.sh: cannot open body.txt: No such file or directory
> mv: rename /home/me/listing.txt to /home/me/listing.txt-10-04-19-1435: No 
> such file or directory
> mv: rename /home/me/listing.csv to /home/me/listing.csv-10-04-19-1435: No 
> such file or directory
> mv: rename /home/me/listing.txt-10-04-19-1435 to 
> /home/me/IPO-Backup-Files/listing.txt-10-04-19-1435: No such file or directory
> mv: rename /home/me/listing.csv-10-04-19-1435 to 
> /home/me/IPO-Backup-Files/listing.csv-10-04-19-1435: No such file or directory
>
> Because I added my home directory to the path in crontab, I am at a loss to 
> explain why this is still happening. Anyone have any ideas? Would really 
> appreciate some help.

You are assuming that the script starts in your home directory. It
doesn't, hence: "mv: rename *.csv to listing.csv: No such file or
directory"
-- 
Jonathan Chen 
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Crontab Question

2019-04-10 Thread Software Info
Hi All
I am trying to schedule cron to run a script. The script is in my home 
directory and so I added my home directory to the path file in /etc/crontab 
below.
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:~/bin:/home:/home/me

This is the crontab entry for the scheduled  task below.
49  14  *   *   1-5 root    myscript.sh

myscript.sh grabs a file from another directory if it is there. If not, it says 
“file not uploaded”. If the file is there, it copies it to my home directory, 
strips email addresses out of it and uses mailx to send email to those users. I 
keep getting the error that a number of files in my home directory are missing 
but they are not. 

Please see errors below
mv: rename *.csv to listing.csv: No such file or directory
grep: listing.csv: No such file or directory
/home/me/ipo-script.sh: cannot open body.txt: No such file or directory
mv: rename /home/me/listing.txt to /home/me/listing.txt-10-04-19-1435: No such 
file or directory
mv: rename /home/me/listing.csv to /home/me/listing.csv-10-04-19-1435: No such 
file or directory
mv: rename /home/me/listing.txt-10-04-19-1435 to 
/home/me/IPO-Backup-Files/listing.txt-10-04-19-1435: No such file or directory
mv: rename /home/me/listing.csv-10-04-19-1435 to 
/home/me/IPO-Backup-Files/listing.csv-10-04-19-1435: No such file or directory

Because I added my home directory to the path in crontab, I am at a loss to 
explain why this is still happening. Anyone have any ideas? Would really 
appreciate some help.

Kind Regards
SI



Sent from Mail for Windows 10

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


FreeBSD CI Weekly Report 2019-04-07

2019-04-10 Thread Li-Wen Hsu
(bcc'd to -current and -stable for a wider audience)

FreeBSD CI Weekly Report 2019-04-07
===================================

Here is a summary of the FreeBSD Continuous Integration results for
the period from 2019-04-01 to 2019-04-07.

During this period, we have:

* 1841 builds (96% passed, 4% failed) were executed on aarch64, amd64,
armv6, armv7, i386, mips, mips64, powerpc, powerpc64, powerpcspe,
riscv64, sparc64 architectures for head, stable/12, stable/11
branches.
* 314 test runs (34.7% passed, 65% unstable, 0.3% exception) were
executed on amd64, i386, riscv64 architectures for head, stable/12,
stable/11 branches.
* 4 doc builds (100% passed)

(The statistics from experimental jobs are omitted)

If any of the issues found by CI are in your area of interest or
expertise please investigate the PRs listed below.

The latest web version of this report is available at
https://hackmd.io/s/BymrvPI_4 and the archive is available at
http://hackfoldr.org/freebsd-ci-report/; any help is welcome.

## Failing Tests

* https://ci.freebsd.org/job/FreeBSD-head-amd64-test/
* sys.geom.class.eli.online_resize_test.online_resize
  https://bugs.freebsd.org/237128

* https://ci.freebsd.org/job/FreeBSD-head-i386-test/
* sys.netmap.ctrl-api-test.main
  https://bugs.freebsd.org/237129
* sys.opencrypto.runtests.main
  https://bugs.freebsd.org/237130
* sys.kern.coredump_phnum_test.coredump_phnum
  WIP: https://reviews.freebsd.org/D18495
* lib.libc.sys.sendfile_test.fd_positive_shm_v4
* lib.libc.sys.sendfile_test.hdtr_negative_bad_pointers_v4
* lib.libc.gen.floatunditf_test.floatunditf
* lib.libc.stdio.printfloat_test.hexadecimal_rounding
* lib.msun.ctrig_test.test_small_inputs
* lib.msun.precision_test.t_precision
  https://bugs.freebsd.org/236936

* https://ci.freebsd.org/job/FreeBSD-stable-12-i386-test/
* sys.netmap.ctrl-api-test.main
  https://bugs.freebsd.org/237129
* sys.opencrypto.runtests.main
  https://bugs.freebsd.org/237130
* sys.kern.coredump_phnum_test.coredump_phnum
  WIP: https://reviews.freebsd.org/D18495

* https://ci.freebsd.org/job/FreeBSD-stable-11-amd64-test/
* usr.bin.procstat.procstat_test.kernel_stacks

* https://ci.freebsd.org/job/FreeBSD-stable-11-i386-test/
* sys.netmap.ctrl-api-test.main
  https://bugs.freebsd.org/237129
* sys.opencrypto.runtests.main
  https://bugs.freebsd.org/237130
* usr.bin.procstat.procstat_test.kernel_stacks
* local.kyua.* (31 cases)
* local.lutok.* (3 cases)
* lib.libc.sys.sendfile_test.fd_positive_shm_v4
* lib.libc.sys.sendfile_test.hdtr_negative_bad_pointers_v4

## Failing Tests (from experimental jobs)

* https://ci.freebsd.org/job/FreeBSD-head-amd64-dtrace_test/
* common.ip.t_dtrace_contrib.tst_ipv4localsctp_ksh
* common.ip.t_dtrace_contrib.tst_localsctpstate_ksh

* https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/
There are ~60 failing cases, including flaky ones; see
https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/lastCompletedBuild/testReport/
for more details

## Disabled Tests

* lib.libc.sys.mmap_test.mmap_truncate_signal
  https://bugs.freebsd.org/211924
* sys.fs.tmpfs.mount_test.large
  https://bugs.freebsd.org/212862
* sys.fs.tmpfs.link_test.kqueue
  https://bugs.freebsd.org/213662
* sys.kqueue.libkqueue.kqueue_test.main
  https://bugs.freebsd.org/233586
* usr.bin.procstat.procstat_test.command_line_arguments
  https://bugs.freebsd.org/233587
* usr.bin.procstat.procstat_test.environment
  https://bugs.freebsd.org/233588

## Open Issues

### New

* https://bugs.freebsd.org/237077 possible race in build:
/usr/src/sys/amd64/linux/linux_support.s:38:2: error: expected
relocatable expression

### In progress

* https://bugs.freebsd.org/236936 4 test cases failing on i386 after r345562

### Cause build fails

* [233735: Possible build race: genoffset.o /usr/src/sys/sys/types.h:
error: machine/endian.h: No such file or
directory](https://bugs.freebsd.org/233735)
* [233769: Possible build race: ld: error: unable to find library
-lgcc_s](https://bugs.freebsd.org/233769)

### Others

[Tickets related to testing@](https://preview.tinyurl.com/y9maauwg)

## Other News

* Some PF tests had been failing for a while because of py27-pcap-0.6.5;
thanks to kp@'s report, bofh@ committed an update to py-pcap-0.6.6.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Concern: ZFS Mirror issues (12.STABLE and firmware 19 .v. 20)

2019-04-10 Thread Karl Denninger
On 4/10/2019 08:45, Andriy Gapon wrote:
> On 10/04/2019 04:09, Karl Denninger wrote:
>> Specifically, I *explicitly* OFFLINE the disk in question, which is a
>> controlled operation and *should* result in a cache flush out of the ZFS
>> code into the drive before it is OFFLINE'd.
>>
>> This should result in the "last written" TXG that the remaining online
>> members have, and the one in the offline member, being consistent.
>>
>> Then I "camcontrol standby" the involved drive, which forces a writeback
>> cache flush and a spindown; in other words, re-ordered or not, the
>> on-platter data *should* be consistent with what the system thinks
>> happened before I yank the physical device.
> This may not be enough for a specific [RAID] controller and a specific
> configuration.  It should be enough for a dumb HBA.  But, for example, 
> mrsas(9)
> can simply ignore the synchronize cache command (meaning neither the on-board
> cache is flushed nor the command is propagated to a disk).  So, if you use 
> some
> advanced controller it would make sense to use its own management tool to
> offline a disk before pulling it.
>
> I do not preclude a possibility of an issue in ZFS.  But it's not the only
> possibility either.

In this specific case the adapter in question is...

mps0:  port 0xc000-0xc0ff mem
0xfbb3c000-0xfbb3,0xfbb4-0xfbb7 irq 30 at device 0.0 on pci3
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c

Which is indeed a "dumb" HBA (in IT mode), and Zeephod says he connects
his drives via dumb on-MoBo direct SATA connections.

What I don't know (yet) is whether the update to firmware 20.00.07.00 in the
HBA has fixed it.  The 11.2 and 12.0 revs of FreeBSD changed timing in the
mps driver quite materially through some mechanism.  Prior to 11.2 I ran
with a Lenovo SAS expander connected to SATA disks without any problems at
all, even across actual disk failures through the years.  In 11.2 and 12.0,
however, the same setup resulted in spurious retries out of the CAM layer
that allegedly came from timeouts on individual units (which looked very
much like a lost command sent to the disk), but only on mirrored volume
sets -- yet there were no errors reported by the drive itself, nor did
either of my RaidZ2 pools (one spinning rust, one SSD) experience problems
of any sort.  Flashing the HBA forward to 20.00.07.00 with the expander in
resulted in the *driver* (mps) taking disconnects and resets instead of the
targets, which in turn caused random drive fault events across all of the
pools.  For obvious reasons that got backed out *fast*.

Without the expander, 19.00.00.00 has been stable over the last few
months *except* for this circumstance: when an intentionally OFFLINE'd
disk in a mirror is brought back online after some reasonably long
period of time (days to a week), the resilver succeeds, but then a small
number of checksum errors -- always on the drive that was OFFLINE'd,
never on the one(s) not taken OFFLINE -- appear and are corrected when a
scrub is subsequently performed.  I am now on 20.00.07.00 and so far --
no problems.  But I've yet to do the backup disk swap on 20.00.07.00
(scheduled for late in the week or Monday), so I do not know whether the
20.00.07.00 roll-forward addresses the scrub issue or not.  I have no
reason to believe it is involved, but given the previously "iffy" nature
of 11.2 and 12.0 on 19.0 with the expander, it very well might be due to
what appear to be timing changes in the driver architecture.
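
The sequence under discussion -- offline the mirror member, flush and spin it
down, pull it, and days later bring it back -- looks roughly like this when
expressed as commands (a sketch; the pool and device names are assumptions):

  zpool offline tank da10        # controlled removal of the mirror member
  camcontrol standby da10        # force a writeback-cache flush and spin-down
  # ...pull the drive, swap it out, reinsert it days later...
  zpool online tank da10         # triggers the resilver
  zpool scrub tank               # where the stray checksum errors show up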

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Re: Concern: ZFS Mirror issues (12.STABLE and firmware 19 .v. 20)

2019-04-10 Thread Andriy Gapon
On 10/04/2019 04:09, Karl Denninger wrote:
> Specifically, I *explicitly* OFFLINE the disk in question, which is a
> controlled operation and *should* result in a cache flush out of the ZFS
> code into the drive before it is OFFLINE'd.
> 
> This should result in the "last written" TXG that the remaining online
> members have, and the one in the offline member, being consistent.
> 
> Then I "camcontrol standby" the involved drive, which forces a writeback
> cache flush and a spindown; in other words, re-ordered or not, the
> on-platter data *should* be consistent with what the system thinks
> happened before I yank the physical device.

This may not be enough for a specific [RAID] controller and a specific
configuration.  It should be enough for a dumb HBA.  But, for example, mrsas(9)
can simply ignore the synchronize cache command (meaning neither the on-board
cache is flushed nor the command is propagated to a disk).  So, if you use some
advanced controller it would make sense to use its own management tool to
offline a disk before pulling it.

I do not preclude a possibility of an issue in ZFS.  But it's not the only
possibility either.

-- 
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"