I probably shouldn't even bother at this point, but morbid curiosity compels me to foolish ends.
What are you DOING with these files that contain NUL bytes that makes it permissible to simply drop the NUL bytes on the floor, with no explicit step like tr -d \\0 to remove them? How is your script anything but buggy, if it relies on a program (the shell) to drop certain input bytes, when there is no documentation stating that it does so? If you are reading a file that uses \0 as a separator between fields, then how is it *OK* for your script to mash the fields all together into one giant mess? Why aren't you reading the fields separately?

Take, for example, the file /proc/1/cmdline on Debian 8 (with mostly default settings):

wooledg@wooledg:~$ cat -vtue /proc/1/cmdline
/lib/systemd/systemd^@--system^@--deserialize^@16^@wooledg@wooledg:~$

I count 4 NUL bytes there, each one presumably terminating a field of some kind. If you simply use $(< /proc/1/cmdline) in some naive script running under bash 2.0 through 4.3, then you get something like:

/lib/systemd/systemd--system--deserialize16

Again, how is a script that produces this as a value not simply buggy?

Perhaps what you really wanted was something like:

{ read -r -d '' a; read -r -d '' b; read -r -d '' c; read -r -d '' d; } \
    < /proc/1/cmdline

which gives:

wooledg@wooledg:~$ declare -p a b c d
declare -- a="/lib/systemd/systemd"
declare -- b="--system"
declare -- c="--deserialize"
declare -- d="16"

I don't know what these fields *mean*, but that's OK, because I'm not the one using them in a script. But this would be a much better way of getting them than a blind command substitution.
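For a file with an arbitrary number of NUL-terminated fields, the four hard-coded read calls generalize to a loop that appends each field to an array. This is only a sketch of that idea: it writes a sample file with printf rather than reading /proc/1/cmdline (so it runs anywhere, not just on a systemd box), and the file and variable names (tmp, fields) are my own, not anything from the discussion above. The loop form works on old bashes too, unlike mapfile -d '' which needs bash 4.4+.

```shell
#!/bin/bash
# Build a sample NUL-delimited file, mimicking /proc/1/cmdline.
# printf '%s\0' emits each argument followed by a NUL byte.
tmp=$(mktemp)
printf '%s\0' /lib/systemd/systemd --system --deserialize 16 > "$tmp"

# Read every NUL-terminated field into an array, one read per field.
# read -d '' uses NUL as the delimiter; -r disables backslash mangling.
fields=()
while IFS= read -r -d '' field; do
    fields+=("$field")
done < "$tmp"

declare -p fields
rm -f "$tmp"
```

Each field survives intact, including any embedded whitespace, which a blind command substitution would also have mangled.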