OK great, thanks!


On 10/23/24 16:08, Oyvind Albrigtsen wrote:
In that case I would report the bug to Ubuntu.


Oyvind

On 23/10/24 15:49 +0300, Murat Inal wrote:
  Hi Oyvind,

  I checked out PR1924 and applied it exactly to my test cluster.

  Problem still exists. Rules do not get deleted, only created.

  Note that:

  - My cluster runs Ubuntu Server 24.04

  - grep is GNU grep 3.11

  - The -q and -E switches are valid and present in the grep man page.

  On 10/23/24 14:43, Oyvind Albrigtsen wrote:

    This could be related to the following PR:
    https://github.com/ClusterLabs/resource-agents/pull/1924/files

    The GitHub version of portblock works fine on Fedora 40, so that's my
    best guess.

    Oyvind

    On 22/10/24 21:44 +0300, Murat Inal wrote:

      Hello Oyvind,

      Using your suggestion, I located the issue in the function
      chain_isactive().

      This function greps for the rule string generated by
      active_grep_pat() in the rule table listing. The generated string no
      longer matches the iptables output. Consequently, the RA decides that
      the rule is ABSENT, although it is PRESENT.
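
      For illustration, here is a hypothetical sketch of the kind of
      mismatch involved: nft-based iptables 1.8.x can print the protocol
      as a number ("17" instead of "udp") in "iptables -n -L" output, so a
      pattern anchored on the protocol name stops matching. The sample
      lines and the pattern below are made up for demonstration and are
      not taken from the RA:

```shell
# Hypothetical demonstration: the same rule as printed by an older and a
# newer iptables. iptables-nft 1.8.x can print the protocol number ("17")
# where the legacy backend printed the name ("udp").
old_line='DROP       udp  --  0.0.0.0/0            172.16.0.1           multiport dports 1234'
new_line='DROP       17   --  0.0.0.0/0            172.16.0.1           multiport dports 1234'

# A made-up grep pattern anchored on the protocol name, in the spirit of
# what active_grep_pat() generates (not the actual pattern from the RA).
pat='^DROP +udp +-- '

echo "$old_line" | grep -qE "$pat" && echo "old output: rule detected"
echo "$new_line" | grep -qE "$pat" || echo "new output: rule NOT detected"
```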

      I opted to use the "iptables --check" command for rule existence
      detection. Below is the function, with modification comments:

      #chain_isactive  {udp|tcp} portno,portno ip chain
      chain_isactive()
      {
          [ "$4" = "OUTPUT" ] && ds="s" || ds="d"
          #PAT=$(active_grep_pat "$1" "$2" "$3" "$ds")          # grep pattern
          #$IPTABLES $wait -n -L "$4" | grep "$PAT" >/dev/null  # old detection line
          # new detection using iptables --check/-C
          iptables -C "$4" -p "$1" -${ds} "$3" -m multiport --${ds}ports "$2" -j DROP
      }

      I tested the modified RA with both actions (block & unblock); it
      works. If you agree with the above, active_grep_pat() has no other
      use and can be deleted from the script.

      On 10/21/24 12:25, Oyvind Albrigtsen wrote:

        I would try running "pcs resource debug-stop --full <resource>" to
        see what's happening, and try to run the "iptables -D" line
        manually if it doesn't show you an error.

        Oyvind

        On 18/10/24 21:45 +0300, Murat Inal wrote:

          Hi Oyvind,

          The current portblock probably has a bug. It CREATES the
          netfilter rule on start(), but DOES NOT DELETE the rule on
          stop().

          Here is the configuration of my simple 2-node + 1-qdevice
          cluster:

          node 1: node-a-knet \
              attributes standby=off
          node 2: node-b-knet \
              attributes standby=off
          primitive r-porttoggle portblock \
              params action="" direction=out ip=172.16.0.1 portno=1234
          protocol=udp \
              op monitor interval=10s timeout=10s \
              op start interval=0s timeout=20s \
              op stop interval=0s timeout=20s
          primitive r-vip IPaddr2 \
              params cidr_netmask=24 ip=10.1.6.253 \
              op monitor interval=10s timeout=20s \
              op start interval=0s timeout=20s \
              op stop interval=0s timeout=20s
          colocation c1 inf: r-porttoggle r-vip
          order o1 r-vip r-porttoggle
          property cib-bootstrap-options: \
              have-watchdog=false \
              dc-version=2.1.6-6fdc9deea29 \
              cluster-infrastructure=corosync \
              cluster-name=testcluster \
              stonith-enabled=false \
              last-lrm-refresh=1729272215

          - I checked the switchover and observed the netfilter chain in
          real time (watch sudo iptables -L OUTPUT),

          - Tried portblock with parameter direction=out and
          direction=both,

          - Checked that the relevant functions IptablesBLOCK() &
          IptablesUNBLOCK() are executing (by inserting syslog mark
          messages inside them). They do run.

          However, the rule is ONLY created, NEVER deleted.

          Any suggestions?

          On 10/9/24 11:26, Oyvind Albrigtsen wrote:

            Correct. That should block the port when the resource is
            stopped on a node (e.g. if you have it grouped with the
            service you're using on the port).

            I would do some testing to ensure it works exactly as you
            expect. E.g. you can telnet to the port, or you can run
            nc/socat on the port and telnet to it from the node it
            blocks/unblocks. If it doesn't accept the connection you know
            it's blocked.
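
            A minimal sketch of such a check from the shell, assuming
            bash with /dev/tcp support and GNU coreutils timeout; the
            host and port below are placeholders:

```shell
# Hypothetical reachability probe: try to open a TCP connection to the
# port the portblock resource manages. A DROP rule makes the attempt hang
# until the timeout; a closed-but-unblocked port fails immediately.
host=127.0.0.1   # placeholder: node/VIP to test
port=3260        # placeholder: the blocked/unblocked port
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "port open (connection accepted)"
else
    echo "port blocked or closed (no connection)"
fi
```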

            Oyvind Albrigtsen

            On 06/10/24 22:46 GMT, Murat Inal wrote:

              Hello,

              I'd like to confirm with you the mechanism of
              ocf:heartbeat:portblock.

              Given a resource definition;

              Resource: r41_LIO (class=ocf provider=heartbeat
              type=portblock)
                Attributes: r41_LIO-instance_attributes
                  action=""
                  ip=10.1.8.194
                  portno=3260
                  protocol=tcp

              - If resource starts, TCP:3260 is UNBLOCKED.

              - If resource is stopped, TCP:3260 is BLOCKED.

              Is that correct? If action="" it will run just the
              opposite, correct?

              To toggle a port, a single portblock resource is enough,
              correct?

              Thanks,

              _______________________________________________
              Manage your subscription:
              https://lists.clusterlabs.org/mailman/listinfo/users

              ClusterLabs home: https://www.clusterlabs.org/


