On Thu, Aug 18, 2011 at 12:38 AM, Matthias G. Eckermann m...@suse.com wrote:
Ah, sure. Sorry. blocxx packages for:
Fedora_14 Fedora_15
RHEL-5 RHEL-6
SLE_11_SP1
openSUSE_11.4 openSUSE_Factory
are available in the openSUSE Build Service at:
On Wed, Feb 22, 2012 at 07:14:44PM +, Alex wrote:
[Referring to https://lkml.org/lkml/2012/1/17/381], and perhaps I'm a bit
premature, but what is the command sequence to change the RAID levels?
I wouldn't mind being pointed to the git manual if that's easier for you.
Look at
OK.
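For anyone landing on this thread later: RAID profile conversion is done with balance filters on a mounted filesystem. A minimal sketch, assuming btrfs-progs with balance-filter support and a filesystem mounted at /mnt (path illustrative):

```shell
# Convert both data and metadata to raid1 while the filesystem stays online.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

# Verify the resulting block group profiles.
btrfs filesystem df /mnt
```

The balance rewrites every block group under the new profile, so it can take a long time on large filesystems.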
On Wed, Feb 22, 2012 at 8:58 PM, Duncan 1i5t5.dun...@cox.net wrote:
qasdfgtyuiop posted on Tue, 21 Feb 2012 20:11:06 +0800 as excerpted:
I'm using GNU/Linux with a btrfs root. My filesystem was created with the
command mkfs.btrfs /dev/sda. Today I'm trying to install Microsoft
Windows 7 on
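Making the filesystem on the bare disk (mkfs.btrfs /dev/sda) leaves no partition table for the Windows installer to work with. A hedged sketch of the usual dual-boot layout instead, with device names and sizes purely illustrative:

```shell
# Create a partition table and give btrfs a partition, not the whole disk,
# leaving unallocated space for the Windows installer.
parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary 1MiB 50%
mkfs.btrfs /dev/sda1
```

Note this destroys whatever is on the disk; data on a whole-disk btrfs would have to be backed up first.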
On Thursday 2012-02-23 10:57, Andrew Morton wrote:
But there's more,
24931 ?  S  0:00 \_ [btrfs-endio-met]
\_ [kconservative/5]
\_ [ext4-dio-unwrit]
[with a wondersome patch:] $ grep Name /proc/{29431,29432}/stat*
The autosnap code will be available either at the end of this week or early
next week, and what you will notice is that the autosnap snapshots
are named using a uuid.
The main reason to drop timestamp-based names is that:
- a test (clicking the Take-snapshot button) which took more
than one snapshot per second was
Thank you very much Ilya.
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Thursday 23 of February 2012 20:02:38 Anand Jain wrote:
The autosnap code will be available either at the end of this week or early
next week, and what you will notice is that the autosnap snapshots
are named using a uuid.
The main reason to drop timestamp-based names is that:
- a test (clicking on
On Thu, Feb 23, 2012 at 10:12:26AM +0800, Liu Bo wrote:
On 02/23/2012 01:43 AM, Chris Mason wrote:
Normally I just toss patches into git, but this one is pretty subtle and
I wanted to send it around for extra review. QA at Oracle did a test
where they unplugged one drive of a btrfs raid1
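For anyone reproducing that kind of unplug test: the surviving half of a raid1 is mounted with the degraded option, and redundancy is restored by adding a replacement and dropping the missing member. A sketch with illustrative device names:

```shell
mount -o degraded /dev/sdb /mnt     # mount the surviving raid1 member
btrfs device add /dev/sdc /mnt      # add a replacement device
btrfs device delete missing /mnt    # drop the unplugged device's record
btrfs balance start /mnt            # rewrite block groups onto both devices
```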
On Thu, 23 Feb 2012 12:19:28 +0100 (CET) Jan Engelhardt jeng...@medozas.de
wrote:
On Thursday 2012-02-23 10:57, Andrew Morton wrote:
But there's more,
24931 ?  S  0:00 \_ [btrfs-endio-met]
\_ [kconservative/5]
On Thursday 2012-02-23 18:30, Andrew Morton wrote:
Teach ps(1) to look in /proc/pid/status for kernel threads?
To what end? The name in /proc/pid/status is also limited to
TASK_COMM_LEN.
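The truncation both threads are discussing can be demonstrated from userspace, since a process's comm name obeys the same TASK_COMM_LEN limit (16 bytes, i.e. 15 characters plus the NUL). A small sketch on Linux:

```shell
# Write a 23-character name; the kernel silently clips it to 15 characters,
# which is exactly why ps shows names like [btrfs-endio-met].
echo "btrfs-endio-meta-fixers" > /proc/$$/comm
cat /proc/$$/comm    # prints "btrfs-endio-met"
```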
Hi,
My kernel is the 32-bit 3.2.0-rc5 and I'm using btrfs-tools 0.19.
I was having performance issues with btrfs due to fragmentation on
HDDs, so I decided to switch to an SSD to see if those would go away.
Performance was much better, but at times I would see a freeze
happen which I can't really
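Not a diagnosis, but freezes on early SSDs are commonly traced to synchronous discard on every delete; a hedged pair of things to try (mount point and device illustrative):

```shell
# Mount without inline discard and batch the TRIM work instead.
mount -o ssd,noatime /dev/sda1 /mnt
fstrim -v /mnt    # issue one batched TRIM pass over the free space
```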
On 02/13/2012 04:17 PM, Ralf-Peter Rohbeck wrote:
Hello,
is it possible to set nodatacow on a per-file basis? I couldn't find
anything.
If not, wouldn't that be a great feature to get around the performance
issues with VM and database storage? Of course cloning should still
cause COW.
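For what it's worth, per-file nodatacow is exposed as the C file attribute on recent kernels, and it only takes full effect when set on a still-empty file. A sketch with an illustrative path:

```shell
touch /mnt/vm.img        # +C must be set while the file is still empty
chattr +C /mnt/vm.img    # disable copy-on-write for this file only
lsattr /mnt/vm.img       # the 'C' attribute should now be listed
```

Cloning (reflink) such a file still shares extents, so the first write to a shared extent is still COWed, which matches the behavior asked for above.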
Hello,
(When we have this I shall update the btrfs wiki)
As promised, an article was posted here some time back:
Writing patch for btrfs
http://btrfs.ipv5.de/index.php?title=Writing_patch_for_btrfs
It had been waiting in my mail drafts for a long time because kernel.org
was down; sorry for the delay.
-Anand
Thanks for the inputs; there is no clear winner as of now.
Let me keep the uuid for now; if more sysadmins feel a timestamp
is better, we could devise it that way.
-Anand
I'd like to vote for timestamp/timestamp-uuid as a sysadmin. The
timestamp allows for easy conversion from clients' wants to actual
commands: "I need my data from two days ago" is easy when I have
timestamps to use.
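A middle ground between the two camps is a timestamp-plus-uuid name: sortable and human-readable up front, globally unique at the end. A hypothetical naming helper, assuming Linux for the kernel uuid source; the snapshot command itself is illustrative:

```shell
# Compose "YYYYmmdd-HHMMSS-<uuid>" as a snapshot name.
snap_name="$(date +%Y%m%d-%H%M%S)-$(cat /proc/sys/kernel/random/uuid)"
echo "$snap_name"
# btrfs subvolume snapshot -r /mnt /mnt/.snapshots/"$snap_name"
```

Two snapshots taken within the same second still get distinct names, which addresses the more-than-one-per-second test mentioned earlier.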
On 2/23/2012 10:05 PM, Anand Jain wrote:
Thanks for the inputs. there is
Nik Markovic posted on Thu, 23 Feb 2012 20:31:02 -0600 as excerpted:
I noticed a few errors in the script that I used. I corrected it, and it
seems that degradation is occurring even with fully random writes:
I don't have an SSD, but is it possible that you're simply seeing
erase-block related