When I start DRBD 8.3.12 (the ELRepo build) I get the same message. My 
drbd.conf is below; I have also included the abrt bug report.

--------------------------------------------------------------------------------------------------
drbd.conf
--------------------------------------------------------------------------------------------------
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#

global {
    minor-count 64;
    usage-count yes;
}


common {
  syncer {
    rate 200M;
    verify-alg sha1;
#    csums-alg sha1;
    al-extents 3733;
    cpu-mask 3;
  }
}

resource VMstore1 {

  protocol C;

  startup {
    wfc-timeout  1800; # 30 min
    degr-wfc-timeout 120;    # 2 minutes.
    wait-after-sb;
#    become-primary-on both;
  }

  disk {
   no-disk-barrier;
   no-disk-flushes;
  }

  net {
    max-buffers 8000;
    max-epoch-size 8000;
    sndbuf-size 0;
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  syncer {
    cpu-mask 300;
  }


  on vmhost1a.vdl-fittings.local {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.100.17:7788;
    meta-disk internal;
  }
  on vmhost1b.vdl-fittings.local {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.100.18:7788;
    meta-disk internal;
  }
}
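
For what it's worth, cpu-mask takes a hexadecimal CPU bitmask (as I read the drbd.conf man page), so the common-section value and the resource-level override above pin DRBD's threads to different CPUs:

```shell
# cpu-mask in drbd.conf is a hexadecimal bitmask of CPUs.
# "3"   -> 0x003 -> binary 0000000011 -> CPUs 0 and 1
# "300" -> 0x300 -> binary 1100000000 -> CPUs 8 and 9
printf 'cpu-mask 3   -> %d -> CPUs 0,1\n' "$((0x3))"
printf 'cpu-mask 300 -> %d -> CPUs 8,9\n' "$((0x300))"
```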

--------------------------------------------------------------------------------------------------
abrt bug report file
--------------------------------------------------------------------------------------------------
Duplicate check
=====


Common information
=====
package
-----
kernel

kernel
-----
2.6.32-220.7.1.el6.x86_64

architecture
-----
x86_64



Additional information
=====
kernel_tainted_long
-----
Taint on warning.

kernel_tainted
-----
512

backtrace
-----
WARNING: at block/blk-core.c:1296 __make_request+0x525/0x5a0() (Not tainted) 
Hardware name: H8DGU
block: BARRIER is deprecated, use FLUSH/FUA instead Modules linked in: drbd(U) 
ip6table_filter ip6_tables iptable_filter ip_tables ebtable_nat ebtables 
autofs4 sunrpc bridge 8021q garp stp llc bonding ipv6 vhost_net macvtap macvlan 
tun kvm_amd kvm qlcnic igb dca microcode serio_raw sg k10temp amd64_edac_mod 
edac_core edac_mce_amd i2c_piix4 i2c_core shpchp ext4 mbcache jbd2 sr_mod cdrom 
sd_mod crc_t10dif hpsa(U) pata_acpi ata_generic pata_atiixp ahci usb_storage 
dm_mirror dm_region_hash dm_log dm_mod [last unloaded: mperf]
Pid: 3158, comm: cqueue Not tainted 2.6.32-220.7.1.el6.x86_64 #1 Call Trace:
[<ffffffff81069a17>] ? warn_slowpath_common+0x87/0xc0 [<ffffffff81069b06>] ? 
warn_slowpath_fmt+0x46/0x50 [<ffffffff81251af5>] ? __make_request+0x525/0x5a0 
[<ffffffffa03b81b8>] ? drbd_queue_work+0x58/0x70 [drbd] [<ffffffffa03be611>] ? 
__drbd_set_state+0x7b1/0xf40 [drbd] [<ffffffff8124ff42>] ? 
generic_make_request+0x2b2/0x5c0 [<ffffffff8106a2af>] ? 
release_console_sem+0x1cf/0x220 [<ffffffff812502df>] ? submit_bio+0x8f/0x120 
[<ffffffffa03b5444>] ? _drbd_md_sync_page_io+0x124/0x350 [drbd] 
[<ffffffffa03b5754>] ? drbd_md_sync_page_io+0xe4/0x6b0 [drbd] 
[<ffffffff8107bf8c>] ? lock_timer_base+0x3c/0x70 [<ffffffffa03bf015>] ? 
drbd_md_sync+0x205/0x5c0 [drbd] [<ffffffffa03c885e>] ? 
_drbd_set_state.clone.0+0x4e/0x60 [drbd] [<ffffffffa03ce406>] ? 
drbd_nl_disk_conf+0xe46/0x10d0 [drbd] [<ffffffff8127124d>] ? 
rb_insert_color+0x9d/0x160 [<ffffffff811ec4f7>] ? sysfs_link_sibling+0xe7/0x130 
[<ffffffff811ecee5>] ? sysfs_addrm_finish+0x25/0x270 [<ffffffff811ed50c>] ? 
sysfs_add_one+0x2c/0x130 [<ffffffff8125866a>] ? add_disk+0xca/0x160 
[<ffffffffa03c9d1f>] ? drbd_connector_callback+0x13f/0x2b0 [drbd] 
[<ffffffff81337970>] ? cn_queue_wrapper+0x0/0x50 [<ffffffff81337998>] ? 
cn_queue_wrapper+0x28/0x50 [<ffffffff8108b150>] ? worker_thread+0x170/0x2a0 
[<ffffffff81090a90>] ? autoremove_wake_function+0x0/0x40 [<ffffffff8108afe0>] ? 
worker_thread+0x0/0x2a0 [<ffffffff81090726>] ? kthread+0x96/0xa0 
[<ffffffff8100c14a>] ? child_rip+0xa/0x20 [<ffffffff81090690>] ? 
kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20


hostname
-----
vmhost1a.vdl-fittings.local

component
-----
kernel

time
-----
1333613283

cmdline
-----
ro root=UUID=8d29fbdd-c257-4f68-8b1d-4b5192ce7e87 rd_NO_LUKS LANG=en_US.UTF-8  
KEYBOARDTYPE=pc KEYTABLE=us-acentos rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 
rhgb crashkernel=133M@0M rd_NO_LVM rd_NO_DM elevator=deadline

analyzer
-----
Kerneloops

kernel_tainted_short
-----
---------W

reason
-----
WARNING: at block/blk-core.c:1296 __make_request+0x525/0x5a0() (Not tainted)

os_release
-----
CentOS release 6.2 (Final)





-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Florian Haas
Sent: Thursday, April 5, 2012 9:52
To: Ryan Shannon
CC: [email protected]
Subject: Re: [DRBD-user] block: BARRIER is deprecated, use FLUSH/FUA instead

On Wed, Apr 4, 2012 at 10:29 PM, Ryan Shannon <[email protected]> wrote:
> Hi folks,
>
> We are running a file-server cluster using centos6, drbd, corosync, 
> and pacemaker. Each time drbd is started, a kernel oops is triggered 
> (see below).

Nope. That's a call trace, but not an oops.

> I have the 'no-disk-barrier' option enabled in /etc/drbd.conf.

Are you absolutely positive it really applies to the resource in question, as 
per "drbdsetup /dev/drbd0 show"?

I'd be surprised to see that "BARRIER is deprecated" warning on a system 
running both an up-to-date RHEL kernel and DRBD 8.3.12. Of course, something 
could be messed up in the ELRepo build (incorrectly applied compatibility 
wrappers at build time), or something may be fishy in DRBD itself. A not 
entirely dissimilar issue was fixed in 8.4 recently, but it shouldn't really 
affect 8.3, so that would be a bit of a long shot.

At any rate, no way to tell without verifying that the no-disk-barrier option 
is in fact enabled on your running resource.
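
Something along these lines would do it (the sample output below is illustrative, not captured from a real node; on your host, pipe the actual drbdsetup output instead):

```shell
# Illustrative excerpt of what "drbdsetup /dev/drbd0 show" might print
# for the disk section -- this sample is made up for the example.
sample_show='disk {
	on-io-error     	detach;
	no-disk-barrier 	;
	no-disk-flushes 	;
}'

# On a live node you would run the real command instead:
#   drbdsetup /dev/drbd0 show | grep -c no-disk-barrier
matches=$(printf '%s\n' "$sample_show" | grep -c no-disk-barrier)
echo "no-disk-barrier entries: $matches"   # 1 means the option is active
```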

Hope this helps,
Florian

--
Need help with High Availability?
http://www.hastexo.com/now
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
