MY $25,000,000.00 INVESTMENT PROPOSAL WITH YOU AND IN YOUR COUNTRY.

2019-08-22 Thread Law firm(Eku and Associates)
Dear,
With due respect, this is not a spam or scam mail; I have contacted you
before and there was no response from you. I apologise if the contents of
this mail are contrary to your moral ethics, which I feel may be of great
disturbance to your person, but please treat this with absolute
confidentiality, believing that this email reaches you in good faith. My
contacting you is not a mistake or a coincidence, because God can use any
person, known or unknown, to accomplish great things.
I am a lawyer and I have an investment business proposal to offer you.
It is not official, but it should be considered legal and confidential
business. I have a customer's deposit of US$25 million ready to be moved
for investment if you can partner with us. We are ready to offer you 10%
of this total amount as your compensation for supporting the transaction
to completion. If you are interested in helping me, please reply with your
full details as stated below:
(1) Your full names:
(2) Your address:
(3) Your occupation:
(4) Your mobile telephone number:
(5) Your nationality:
(6) Your present location:
(7) Your age:
So that I will provide you more details on what to do and what is
required for successful completion.
Note: DO NOT REPLY IF YOU ARE NOT INTERESTED OR WITHOUT THE
ABOVE-MENTIONED DETAILS.

Yours sincerely,
Attorney Etienne Eku, Esq. (Law firm)
Principal Attorney, West Africa Law Firm.
Skype:westafricalawfirm


conntrack --ignore-error proposal to fix delete races

2019-02-12 Thread William Ahern
conntrack -D suffers from a TOCTTOU race between querying the existing entries
and deleting each entry one-by-one. Entries can simply disappear due to a
timeout, so this is an unavoidable race that makes -D unreliable.

Some users of conntrack have resorted to invoking conntrack in a loop. See, e.g.

  // Retry a few times because the conntrack command seems to fail at random.

from 
https://github.com/projectcalico/felix/blob/4bd4955/conntrack/conntrack.go#L81
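A sketch of that workaround pattern (illustrative shell, not the Calico code):

  # Retry the delete a few times: an entry can expire between the
  # kernel-side listing and the per-entry destroy, failing the run.
  for i in 1 2 3; do
          conntrack -D -d 192.0.2.1 && break
          sleep 0.1
  done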

Attached is a preliminary patch to add an --ignore-error option that behaves
similarly to the -f (force) option of common command-line utilities. It's
currently only obeyed in delete_cb, as I'd like to get feedback on the
semantics. The basic idea is that --ignore-error is provided a list of error
selectors to ignore, where the error selectors can be taken from err2str.
For example, --ignore-error="delete+ENOENT" or just --ignore-error=delete
causes delete_cb to continue even if the NFCT_Q_DESTROY operation failed.
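With the option, a bulk delete that tolerates entries expiring mid-operation
might be invoked as follows (hypothetical usage of the draft option):

  # Don't abort when an entry vanished between the listing and the
  # NFCT_Q_DESTROY request (draft semantics, subject to feedback):
  conntrack --ignore-error=delete+ENOENT -D -p udp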

A simpler alternative might be to ignore destroy failures in delete_cb 
altogether.



ignore-error-20190212.patch
Description: ignore-error-20190212.patch


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-17 Thread Pablo Neira Ayuso
On Sat, Dec 15, 2018 at 01:00:01PM +0100, Arturo Borrero Gonzalez wrote:
> On Wed, 5 Dec 2018 at 12:25, Pablo Neira Ayuso  wrote:
> >
> > On Wed, Dec 05, 2018 at 12:18:30PM +0100, Arturo Borrero Gonzalez wrote:
> > [...]
> > > I would apply the -legacy renaming patch regardless. We already did this
> > > with arptables after the agreement @ NFWS. In fact, the only reason I'm
> > > sending the patch now (instead of last summer) is that I lacked the time
> > > to write it earlier :-)
> >
> > I'm going to apply your patch
> >
> > Author: Arturo Borrero Gonzalez 
> > Date:   Wed Nov 28 13:47:28 2018 +0100
> >
> > ebtables: legacy renaming
> >
> > OK?
> >
> 
> Ok!

http://git.netfilter.org/ebtables/commit/?id=6218f812d894fdd733d95c3c86b385f6f223a36a


Re: Proposal: Reduce void pointer arithmetic in favor of char pointers

2018-12-16 Thread Jan Engelhardt
On Sunday 2018-12-16 17:22, William Woodruff wrote:

>On 12/16/18 6:02 AM, Jan Engelhardt wrote:
>> Though illegal in standard C, clang does support the extension:
>> 
>> 11:59 a4:~ > echo -en 'int main() { void *x = &x; x = x + 1; }' | clang 
>> -x c -c -o /dev/null - -Wall
>> 11:59 a4:~ > 
>> (clang 6.0.1)
>
>Sorry, I should have clarified: the source files in question are C++,
>not C. clang does indeed support the extension in C, but it's a strict
>error in C++ even with extern "C":

Ah yes. That should use char* casting.
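For illustration, the kind of conversion being suggested (a sketch, not taken
from the actual headers):

  /* void* arithmetic: a GNU extension in C, a hard error in C++ */
  /*     return base + offset;                                   */
  /* Portable form: do the arithmetic on char* instead.          */
  static inline void *ptr_add(void *base, unsigned int offset)
  {
          return (char *)base + offset;
  }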


Re: Proposal: Reduce void pointer arithmetic in favor of char pointers

2018-12-16 Thread William Woodruff
On 12/16/18 6:02 AM, Jan Engelhardt wrote:
> Though illegal in standard C, clang does support the extension:
> 
> 11:59 a4:~ > echo -en 'int main() { void *x = &x; x = x + 1; }' | clang 
> -x c -c -o /dev/null - -Wall
> 11:59 a4:~ > 
> (clang 6.0.1)

Sorry, I should have clarified: the source files in question are C++,
not C. clang does indeed support the extension in C, but it's a strict
error in C++ even with extern "C":

$ echo -en 'int main() { void *x = &x; x = x + 1; }' | clang -x 'c++' -
:1:34: error: arithmetic on a pointer to void
int main() { void *x = &x; x = x + 1; }
                               ~ ^


Re: Proposal: Reduce void pointer arithmetic in favor of char pointers

2018-12-16 Thread Jan Engelhardt
On Saturday 2018-12-15 23:02, William Woodruff wrote:

>This program belongs to a framework that is built using clang
>and clang doesn't support void pointer arithmetic

Though illegal in standard C, clang does support the extension:

11:59 a4:~ > echo -en 'int main() { void *x = &x; x = x + 1; }' | clang 
-x c -c -o /dev/null - -Wall
11:59 a4:~ > 
(clang 6.0.1)



Proposal: Reduce void pointer arithmetic in favor of char pointers

2018-12-15 Thread William Woodruff
Hi,

I've been writing a program that uses the netfilter/libiptc
headers, and have run into a few macros and inline functions
that use void* for pointer arithmetic rather than char* (or
uint8_t*).

This program belongs to a framework that is built using clang
and clang doesn't support void pointer arithmetic, so expanding
these macros/triggering the inline function expansion causes a
hard compilation failure.
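To illustrate, the failing pattern looks roughly like this (a hypothetical
macro, not one of the actual libiptc ones):

  /* Hypothetical macro using void* arithmetic on a 'void *base': */
  #define ENTRY_AT(base, off) ((struct ipt_entry *)((base) + (off)))
  /* clang rejects the addition in C++ mode; routing the arithmetic */
  /* through char* compiles in both C and C++:                      */
  #define ENTRY_AT_OK(base, off) \
          ((struct ipt_entry *)((char *)(base) + (off)))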

Would patches to convert the void* arithmetic into char* (or
uint8_t*) arithmetic be welcome?

Best,
William Woodruff


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-15 Thread Arturo Borrero Gonzalez
On Wed, 5 Dec 2018 at 12:25, Pablo Neira Ayuso  wrote:
>
> On Wed, Dec 05, 2018 at 12:18:30PM +0100, Arturo Borrero Gonzalez wrote:
> [...]
> > I would apply the -legacy renaming patch regardless. We already did this
> > with arptables after the agreement @ NFWS. In fact, the only reason I'm
> > sending the patch now (instead of last summer) is that I lacked the time to
> > write it earlier :-)
>
> I'm going to apply your patch
>
> Author: Arturo Borrero Gonzalez 
> Date:   Wed Nov 28 13:47:28 2018 +0100
>
> ebtables: legacy renaming
>
> OK?
>

Ok!

> > Also, once the patch is applied, we should consider a release of both
> > arptables and ebtables now that iptables contains the -nft variant and
> > is being used in the wild.
>
> That's fine with me.


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-05 Thread Pablo Neira Ayuso
On Wed, Dec 05, 2018 at 12:18:30PM +0100, Arturo Borrero Gonzalez wrote:
[...]
> I would apply the -legacy renaming patch regardless. We already did this
> with arptables after the agreement @ NFWS. In fact, the only reason I'm
> sending the patch now (instead of last summer) is that I lacked the time to
> write it earlier :-)

I'm going to apply your patch

Author: Arturo Borrero Gonzalez 
Date:   Wed Nov 28 13:47:28 2018 +0100

ebtables: legacy renaming

OK?

> Also, once the patch is applied, we should consider a release of both
> arptables and ebtables now that iptables contains the -nft variant and
> is being used in the wild.

That's fine with me.


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-05 Thread Arturo Borrero Gonzalez
On 12/4/18 11:57 AM, Pablo Neira Ayuso wrote:
> On Tue, Dec 04, 2018 at 11:50:46AM +0100, Arturo Borrero Gonzalez wrote:
>> On 11/28/18 2:10 PM, Arturo Borrero Gonzalez wrote:
>>> On 11/28/18 1:44 PM, Arturo Borrero Gonzalez wrote:
>>>> Hi,
>>>>
>>>> Now that the iptables.git repo offers arptables-nft and ebtables-nft,
>>>> arptables.git holds arptables-legacy, etc., why don't we just rename the
>>>> repos?
>>>>
>>>> * from arptables.git to arptables-legacy.git
>>>> * from ebtables.git to ebtables-legacy.git
>>>>
>>>> This rename should help distros understand the differences between them
>>>> and better accommodate the packaging of all the related tooling.
>>>>
>>>> Mind that the rename may have side effects in tarball
>>>> generation/publishing etc. I would expect the new arptables tarball to
>>>> include the '-legacy' keyword, and same for ebtables.
>>>>
>>>> If we go ahead with the rename, a new release is worth having,
>>>> announcing these changes as well.
>>>>
>>>
>>> Also,
>>>
>>> please consider applying the attached patch.
>>>
>>
>> ping :-)
> 
> Phil suggested not renaming the trees; I can update the description on
> git.netfilter.org to place LEGACY there. The concern, as you mentioned, is
> that it may break existing links/scripts. Not sure git supports
> redirections from an old repo URI to a new one...
> 

Most people use these tools from their distributions, and those who use
git.netfilter.org directly won't have problems finding a new URL. For people
manually downloading tarballs from netfilter.org it is even less of a
problem. Distro packagers would have to refresh the upstream URL, sure, but
that's a minor thing compared to the big -legacy/-nft move, which requires a
lot of other renaming and adjustments anyway.

I suggested renaming the .git repos because I have already seen several
people confused about the relationship between arptables-legacy,
arptables-nft and the .git repos they are served from (and the same for
ebtables).

It is also worth considering that a repo clearly stating -legacy in its name
will help raise awareness of the -nft version, which could serve as another
motivation to encourage migration.

I don't even have a strong opinion on this :-) it was just a proposal
because I see several benefits.

> I think it's fine to apply a patch to add the "-legacy" postfix as we
> do in iptables.
> 
> Are you OK with this approach?
> 

I would apply the -legacy renaming patch regardless. We already did this
with arptables after the agreement @ NFWS. In fact, the only reason I'm
sending the patch now (instead of last summer) is that I lacked the time to
write it earlier :-)

Also, once the patch is applied, we should consider a release of both
arptables and ebtables now that iptables contains the -nft variant and
is being used in the wild.


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-04 Thread Jan Engelhardt
On Tuesday 2018-12-04 11:57, Pablo Neira Ayuso wrote:
>On Tue, Dec 04, 2018 at 11:50:46AM +0100, Arturo Borrero Gonzalez wrote:
>> On 11/28/18 2:10 PM, Arturo Borrero Gonzalez wrote:
>> > On 11/28/18 1:44 PM, Arturo Borrero Gonzalez wrote:
>> >> Hi,
>> >>
>> >> Now that the iptables.git repo offers arptables-nft and ebtables-nft,
>> >> arptables.git holds arptables-legacy, etc., why don't we just rename the
>> >> repos?
>> >>
>> >> * from arptables.git to arptables-legacy.git
>> >> * from ebtables.git to ebtables-legacy.git
>> > 
>> > please consider applying the attached patch.
>> 
>> ping :-)
>
>Phil suggested not renaming the trees; I can update the description on
>git.netfilter.org to place LEGACY there. The concern, as you mentioned, is
>that it may break existing links/scripts. Not sure git supports
>redirections from an old repo URI to a new one...
>
>I think it's fine to apply a patch to add the "-legacy" postfix as we
>do in iptables.

I think it is sufficient to do one action. Whoever builds the source will run
into the name difference at some point (and that is all that is needed to raise
awareness). Given that git downloads usually do not count as builds, the
program name change seems preferable to renaming the git repo.
(But doing both is of course not too bad either.)


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-04 Thread Pablo Neira Ayuso
On Tue, Dec 04, 2018 at 11:50:46AM +0100, Arturo Borrero Gonzalez wrote:
> On 11/28/18 2:10 PM, Arturo Borrero Gonzalez wrote:
> > On 11/28/18 1:44 PM, Arturo Borrero Gonzalez wrote:
> >> Hi,
> >>
> >> Now that the iptables.git repo offers arptables-nft and ebtables-nft,
> >> arptables.git holds arptables-legacy, etc., why don't we just rename the
> >> repos?
> >>
> >> * from arptables.git to arptables-legacy.git
> >> * from ebtables.git to ebtables-legacy.git
> >>
> >> This rename should help distros understand the differences between them
> >> and better accommodate the packaging of all the related tooling.
> >>
> >> Mind that the rename may have side effects in tarball
> >> generation/publishing etc. I would expect the new arptables tarball to
> >> include the '-legacy' keyword, and same for ebtables.
> >>
> >> If we go ahead with the rename, a new release is worth having,
> >> announcing these changes as well.
> >>
> > 
> > Also,
> > 
> > please consider applying the attached patch.
> > 
> 
> ping :-)

Phil suggested not renaming the trees; I can update the description on
git.netfilter.org to place LEGACY there. The concern, as you mentioned, is
that it may break existing links/scripts. Not sure git supports
redirections from an old repo URI to a new one...

I think it's fine to apply a patch to add the "-legacy" postfix as we
do in iptables.

Are you OK with this approach?

Thanks.


Re: Proposal: rename of arptables.git and ebtables.git

2018-12-04 Thread Arturo Borrero Gonzalez
On 11/28/18 2:10 PM, Arturo Borrero Gonzalez wrote:
> On 11/28/18 1:44 PM, Arturo Borrero Gonzalez wrote:
>> Hi,
>>
>> Now that the iptables.git repo offers arptables-nft and ebtables-nft,
>> arptables.git holds arptables-legacy, etc., why don't we just rename the
>> repos?
>>
>> * from arptables.git to arptables-legacy.git
>> * from ebtables.git to ebtables-legacy.git
>>
>> This rename should help distros understand the differences between them
>> and better accommodate the packaging of all the related tooling.
>>
>> Mind that the rename may have side effects in tarball
>> generation/publishing etc. I would expect the new arptables tarball to
>> include the '-legacy' keyword, and same for ebtables.
>>
>> If we go ahead with the rename, a new release is worth having,
>> announcing these changes as well.
>>
> 
> Also,
> 
> please consider applying the attached patch.
> 

ping :-)


Re: Proposal: rename of arptables.git and ebtables.git

2018-11-28 Thread Arturo Borrero Gonzalez
On 11/28/18 1:44 PM, Arturo Borrero Gonzalez wrote:
> Hi,
> 
> Now that the iptables.git repo offers arptables-nft and ebtables-nft,
> arptables.git holds arptables-legacy, etc., why don't we just rename the
> repos?
> 
> * from arptables.git to arptables-legacy.git
> * from ebtables.git to ebtables-legacy.git
> 
> This rename should help distros understand the differences between them
> and better accommodate the packaging of all the related tooling.
> 
> Mind that the rename may have side effects in tarball
> generation/publishing etc. I would expect the new arptables tarball to
> include the '-legacy' keyword, and same for ebtables.
> 
> If we go ahead with the rename, a new release is worth having,
> announcing these changes as well.
> 

Also,

please consider applying the attached patch.

thanks.
commit ee8a588338e7c75e90fcc49a69e3d3b018063828
Author: Arturo Borrero Gonzalez 
Date:   Wed Nov 28 13:47:28 2018 +0100

ebtables: legacy renaming

The original ebtables tool is now the legacy version, so let's rename it.

A more up-to-date client of the ebtables tool is provided in the iptables
tarball (ebtables-nft). The new tool was formerly known as ebtables-compat.

The new -legacy binary has no problem being called via a symlink with the
'ebtables' name, so users can still invoke this binary under whatever name
they like.

Signed-off-by: Arturo Borrero Gonzalez 

diff --git a/Makefile.am b/Makefile.am
index 14938fe..b16a4d6 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -26,11 +26,11 @@ AM_CPPFLAGS = ${regular_CPPFLAGS} -I${top_srcdir}/include \
 	-DEBTD_PIPE=\"${PIPE}\" -DEBTD_PIPE_DIR=\"${PIPE_DIR}\"
 AM_CFLAGS = ${regular_CFLAGS}
 
-sbin_PROGRAMS = ebtables ebtablesd ebtablesu ebtables-restore
+sbin_PROGRAMS = ebtables-legacy ebtablesd ebtablesu ebtables-legacy-restore
 EXTRA_PROGRAMS = static examples/ulog/test_ulog
 sysconf_DATA = ethertypes
-sbin_SCRIPTS = ebtables-save
-man8_MANS = ebtables.8
+sbin_SCRIPTS = ebtables-legacy-save
+man8_MANS = ebtables-legacy.8
 lib_LTLIBRARIES = libebtc.la
 
 libebtc_la_SOURCES = \
@@ -47,21 +47,22 @@ libebtc_la_SOURCES = \
 	extensions/ebtable_nat.c
 # Make sure ebtables.c can be built twice
 libebtc_la_CPPFLAGS = ${AM_CPPFLAGS}
-ebtables_SOURCES = ebtables-standalone.c
-ebtables_LDADD = libebtc.la
+ebtables_legacy_SOURCES = ebtables-standalone.c
+ebtables_legacy_LDADD = libebtc.la
 ebtablesd_LDADD = libebtc.la
-ebtables_restore_LDADD = libebtc.la
+ebtables_legacy_restore_SOURCES = ebtables-restore.c
+ebtables_legacy_restore_LDADD = libebtc.la
 static_SOURCES = ebtables.c
 static_LDFLAGS = -static
 static_LDADD = libebtc.la
 examples_ulog_test_ulog_SOURCES = examples/ulog/test_ulog.c getethertype.c
 
 daemon: ebtablesd ebtablesu
-exec: ebtables ebtables-restore
+exec: ebtables-legacy ebtables-legacy-restore
 
-CLEANFILES = ebtables-save ebtables.sysv ebtables-config ebtables.8
+CLEANFILES = ebtables-legacy-save ebtables.sysv ebtables-config ebtables-legacy.8
 
-ebtables-save: ebtables-save.in ${top_builddir}/config.status
+ebtables-legacy-save: ebtables-save.in ${top_builddir}/config.status
 	${AM_V_GEN}sed -e 's![@]sbindir@!${sbindir}!g' <$< >$@
 
 ebtables.sysv: ebtables.sysv.in ${top_builddir}/config.status
@@ -70,7 +71,7 @@ ebtables.sysv: ebtables.sysv.in ${top_builddir}/config.status
 ebtables-config: ebtables-config.in ${top_builddir}/config.status
 	${AM_V_GEN}sed -e 's![@]sysconfigdir@!${sysconfigdir}!g' <$< >$@
 
-ebtables.8: ebtables.8.in ${top_builddir}/config.status
+ebtables-legacy.8: ebtables-legacy.8.in ${top_builddir}/config.status
 	${AM_V_GEN}sed -e 's![@]PACKAGE_VERSION!${PACKAGE_VERSION}!g' \
 		-e 's![@]PACKAGE_DATE@!${PROGDATE}!g' \
 		-e 's![@]LOCKFILE@!${LOCKFILE}!g' <$< >$@
diff --git a/ebtables.8.in b/ebtables-legacy.8.in
similarity index 98%
rename from ebtables.8.in
rename to ebtables-legacy.8.in
index 3e97c84..3417045 100644
--- a/ebtables.8.in
+++ b/ebtables-legacy.8.in
@@ -24,7 +24,7 @@
 .\" 
 .\"
 .SH NAME
-ebtables (@PACKAGE_VERSION@) \- Ethernet bridge frame table administration
+ebtables-legacy (@PACKAGE_VERSION@) \- Ethernet bridge frame table administration (legacy)
 .SH SYNOPSIS
 .BR "ebtables " [ -t " table ] " - [ ACDI "] chain rule specification [match extensions] [watcher extensions] target"
 .br
@@ -50,6 +50,18 @@ ebtables (@PACKAGE_VERSION@) \- Ethernet bridge frame table administration
 .br
 .BR "ebtables " [ -t " table ] [" --atomic-file " file] " --atomic-save
 .br
+
+.SH LEGACY
+This tool uses the old xtables/setsockopt framework and is a legacy version
+of ebtables. That means a newer, more modern tool with the same
+functionality exists, based on the nf_tables framework, and you are
+encouraged to migrate now. The new binaries (known as ebtables-nft and
+formerly known as ebtables-compat) use the same syntax and semantics as
+this legacy one.
+
+You can still use this legacy tool. You should probably get some specific
+information from your Linux distribution or vendor.
+More docs are ava
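The symlink compatibility mentioned in the commit message can be used like
this (illustrative paths):

  # Packagers can keep the familiar name pointing at the legacy binary:
  ln -s /usr/sbin/ebtables-legacy /usr/sbin/ebtables
  ebtables -L    # runs ebtables-legacy via the symlink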

Proposal: rename of arptables.git and ebtables.git

2018-11-28 Thread Arturo Borrero Gonzalez
Hi,

Now that the iptables.git repo offers arptables-nft and ebtables-nft,
arptables.git holds arptables-legacy, etc., why don't we just rename the
repos?

* from arptables.git to arptables-legacy.git
* from ebtables.git to ebtables-legacy.git

This rename should help distros understand the differences between them
and better accommodate the packaging of all the related tooling.

Mind that the rename may have side effects in tarball
generation/publishing etc. I would expect the new arptables tarball to
include the '-legacy' keyword, and same for ebtables.

If we go ahead with the rename, a new release is worth having,
announcing these changes as well.


Proposal

2018-04-16 Thread MS Zeliha Omer Faruk

Hello

Greetings to you. Did you get my previous email regarding my investment
proposal last week Friday?

MS.Zeliha ömer faruk
zeliha.omer.fa...@gmail.com



Proposal: Add config option to set xtable_lock wait = true.

2018-04-04 Thread Jack Ma

Hi Florian & Pablo,

I noticed that lots of iptables users are likely to miss the '-w' option when
implementing multi-process programs.

Because iptables and ip6tables do not wait for the xtables lock by default,
people can easily misconfigure their rules because of concurrency issues.

I'd like to propose a global config option to set the default wait interval and 
allow iptables to always wait for the lock.


i.e.

" iptables --always-wait (ms) "; if no value is specified, then use the
default of 1 second.
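For comparison, what exists today must be requested per invocation (assuming
an iptables recent enough to support -w/-W):

  # Wait up to 5 seconds for the xtables lock instead of failing
  # immediately; -W sets the polling interval in microseconds.
  iptables -w 5 -W 100000 -A INPUT -p tcp --dport 22 -j ACCEPT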

I find it hard to imagine any users who would want to run an iptables command
without the lock.

Does this proposal sound sane-ish?

Regards,
Jack



nftables patch proposal: debug_mask propagate through cache_update() just as it is.

2018-03-13 Thread nozzy123nozzy
Hi nft developers, 

I would like to propose this patch to netfilter.

This patch makes cache_update() honour all of nft's "--debug" levels as
given, instead of only the "netlink" one.

Currently, nft masks out every debug level except "netlink" when calling
cache_update(), which makes it inconvenient to inspect the messages that
cache_update() itself generates.

Example: "nft --debug mnl list ruleset" doesn't show any debug information
today. With this patch, nft shows the mnl debug information, which is
convenient for debugging (at least to me).

How about it? I would be glad if you accept this patch.

Thank you in advance,

Takahide Nojima.

The patch is here:
From fbdf4d73328580031e1e68b6a163f640330253b9 Mon Sep 17 00:00:00 2001
From: Takahide Nojima 
Date: Sat, 10 Mar 2018 15:36:30 +0900
Subject: debug_mask parameter pass through to cache_update()

Signed-off-by: Takahide Nojima 
---
 include/rule.h |  2 +-
 src/evaluate.c | 22 +++---
 src/netlink.c  |  2 +-
 src/rule.c     |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/rule.h b/include/rule.h
index 86f7281..769c54c 100644
--- a/include/rule.h
+++ b/include/rule.h
@@ -552,7 +552,7 @@ struct netlink_ctx;
 extern int do_command(struct netlink_ctx *ctx, struct cmd *cmd);
 
 extern int cache_update(struct mnl_socket *nf_sock, struct nft_cache *cache,
-                        enum cmd_ops cmd, struct list_head *msgs, bool debug,
+                        enum cmd_ops cmd, struct list_head *msgs, unsigned int debug_mask,
                         struct output_ctx *octx);
 extern void cache_flush(struct list_head *table_list);
 extern void cache_release(struct nft_cache *cache);
diff --git a/src/evaluate.c b/src/evaluate.c
index a2c1c72..097d0a1 100644
--- a/src/evaluate.c
+++ b/src/evaluate.c
@@ -184,7 +184,7 @@ static int expr_evaluate_symbol(struct eval_ctx *ctx, struct expr **expr)
         break;
     case SYMBOL_SET:
         ret = cache_update(ctx->nf_sock, ctx->cache, ctx->cmd->op,
-                           ctx->msgs, ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                           ctx->msgs, ctx->debug_mask, ctx->octx);
         if (ret < 0)
             return ret;
 
@@ -3076,14 +3076,14 @@ static int cmd_evaluate_add(struct eval_ctx *ctx, struct cmd *cmd)
     switch (cmd->obj) {
     case CMD_OBJ_SETELEM:
         ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op,
-                           ctx->msgs, ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                           ctx->msgs, ctx->debug_mask, ctx->octx);
         if (ret < 0)
             return ret;
 
         return setelem_evaluate(ctx, &cmd->expr);
     case CMD_OBJ_SET:
         ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op,
-                           ctx->msgs, ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                           ctx->msgs, ctx->debug_mask, ctx->octx);
         if (ret < 0)
             return ret;
 
@@ -3094,7 +3094,7 @@ static int cmd_evaluate_add(struct eval_ctx *ctx, struct cmd *cmd)
         return rule_evaluate(ctx, cmd->rule);
     case CMD_OBJ_CHAIN:
         ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op,
-                           ctx->msgs, ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                           ctx->msgs, ctx->debug_mask, ctx->octx);
         if (ret < 0)
             return ret;
 
@@ -3126,7 +3126,7 @@ static int cmd_evaluate_delete(struct eval_ctx *ctx, struct cmd *cmd)
     switch (cmd->obj) {
     case CMD_OBJ_SETELEM:
         ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op,
-                           ctx->msgs, ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                           ctx->msgs, ctx->debug_mask, ctx->octx);
         if (ret < 0)
             return ret;
 
@@ -3153,7 +3153,7 @@ static int cmd_evaluate_get(struct eval_ctx *ctx, struct cmd *cmd)
     int ret;
 
     ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op, ctx->msgs,
-                       ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                       ctx->debug_mask, ctx->octx);
     if (ret < 0)
         return ret;
 
@@ -3199,7 +3199,7 @@ static int cmd_evaluate_list(struct eval_ctx *ctx, struct cmd *cmd)
     int ret;
 
     ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op, ctx->msgs,
-                       ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->octx);
+                       ctx->debug_mask, ctx->octx);
     if (ret < 0)
         return ret;
 
@@ -3287,7 +3287,7 @@ static int cmd_evaluate_reset(struct eval_ctx *ctx, struct cmd *cmd)
     int ret;
 
     ret = cache_update(ctx->nf_sock, ctx->cache, cmd->op, ctx->msgs,
-                       ctx->debug_mask & NFT_DEBUG_NETLINK, ctx->

[ulog2] Plugin ulogd_filter_HTTPSNIFF proposal

2018-01-12 Thread Jean Weisbuch
I created a filter plugin for ulogd2 that retrieves information from HTTP
requests. It is similar to ulogd_filter_PWSNIFF (on which the code is based)
and allows monitoring/logging of the HTTP requests sent/received by the
system. Other PCAP-based solutions can log HTTP queries, but not along with
information such as the UID that sent the packet, which can be useful to see
which users are executing viruses/hacked scripts.


Output of "ulogd --info ulogd_filter_HTTPSNIFF.so" :
    Input keys:
        Key: raw.pkt (raw data)
    Output keys:
        Key: httpsniff.host (string)
        Key: httpsniff.uri (string)
        Key: httpsniff.method (unsigned int 8)
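A minimal ulogd.conf stack wiring the plugin in might look like this (the
instance names and the MYSQL output are assumptions, not from the plugin's
README):

    # NFLOG source -> BASE interpreter -> HTTPSNIFF filter -> SQL output
    stack=log1:NFLOG,base1:BASE,http1:HTTPSNIFF,mysql1:MYSQL

    [log1]
    group=1   # must match the NFLOG group used in the ruleset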


The code is available at: https://github.com/jb-boin/ulogd_filter_HTTPSNIFF

It requires both https://bugzilla.netfilter.org/show_bug.cgi?id=1192 and 
https://bugzilla.netfilter.org/show_bug.cgi?id=1193 to be fixed for it 
to work correctly.

I included proposed patches for those two bugs.


I have been using it with a MariaDB output for about 3 months on more than
30 Debian Jessie hosts (ulogd2 compiled as packages from the Debian Stretch
2.0.5-5 source package) and I didn't hit any issues so far.



Re: libnftables extended API proposal

2018-01-10 Thread Phil Sutter
Hi Mark,

On Tue, Jan 09, 2018 at 10:46:14PM -0600, mark diener wrote:
> Why don't you just put a JSON layer above the C-based libnftl 0.9?
> 
> That way, whatever is working in the C-based API can then get JSON support
> and disrupt the apple cart.
> 
> Call it libnftljson-0.9.so, which is then dependent on libnftl-0.9.so.
> 
> But keep the C-based API the C-based API,
> and the JSON-calling translator-to-C API a different, optional library.

I'm not sure I follow. The reason we started working on this libnftables
in the first place was that everyone considered libnftnl way too
low-level for direct use in applications.

Are you suggesting to basically reimplement libnftables with JSON
interfacing in both directions?

Cheers, Phil


Re: libnftables extended API proposal

2018-01-09 Thread mark diener
Why don't you just put a JSON layer above the C-based libnftl 0.9?

That way, whatever is working in the C-based API can then get JSON support
and disrupt the apple cart.

Call it libnftljson-0.9.so, which is then dependent on libnftl-0.9.so.

But keep the C-based API the C-based API,
and the JSON-calling translator-to-C API a different, optional library.


Good idea?  Already done?

On Tue, Jan 9, 2018 at 5:31 PM, Pablo Neira Ayuso  wrote:
> On Fri, Jan 05, 2018 at 06:52:03PM +0100, Phil Sutter wrote:
>> Hi Pablo,
>>
>> On Tue, Jan 02, 2018 at 07:02:19PM +0100, Pablo Neira Ayuso wrote:
>> > On Fri, Dec 29, 2017 at 03:58:16PM +0100, Phil Sutter wrote:
>> > > On Thu, Dec 28, 2017 at 08:21:41PM +0100, Pablo Neira Ayuso wrote:
>> > > > On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
>> [...]
>> > > > > But isn't the problem of keeping the API compatible comparable to
>> > > > > the problem of keeping the JSON representation compatible?
>> > > >
>> > > > Well, keeping backward compatibility is always a burden to carry on.
>> > > > Even though, to me, JSON is as extensible as netlink is, ie. we can
>> > > > add new keys and deprecate things without breaking backward.
>> > >
>> > > Yes, the format is very flexible. But how does that help with
>> > > compatibility? You'll have to support the deprecated attributes or JSON
>> > > without the new attributes either way, no?
>> >
>> > Probably it's my subjective judgement that maintaining the json layout
>> > will be easier than a large bunch of APIs and internal object
>> > representations through getters/setters.
>> >
>> > Anyway, these days, we expect people to use modern languages to build
>> > upper layers in the stack, right? And that seems to mix well with JSON.
>> > Again core infrastructure person here talking about upper layers, so
>> > take this lightly ;-).
>>
>> Yeah, for firewalld at least JSON is not a disadvantage. Not sure about
>> projects written in C though, but that's a different kettle of fish. :)
>
> It seems to me that much of this new control plane/orchestration software
> is not written in C, so this representation can also be useful to them.
>
>> > [...]
>> > > > Oh, I can help you on that. Although you're very much closer to
>> > > > firewalld usecases that I'm, so probably a draft from you on this
>> > > > would be nice ;-)
>> > >
>> > > I went ahead and converted my local ruleset into JSON manually (until I
>> > > got bored), see attached ruleset.json. Then I wrote a schema to validate
>> > > it (also attached). Please let me know if you're OK with the base
>> > > format at least, i.e. everything except expressions and statements. Any
>> > > feedback on the few statements and expressions I put in there is highly
>> > > appreciated as well, of course! :)
>> >
>> > Probably instead of having left and right, we can replace it by:
>> >
>> > "match" : {
>> > "key": {
>> > ...
>> > },
>> > "value": "lo"
>> > }
>> >
>> > Then, allow to place expressions as "value" when it comes from a set
>> > lookup.
>>
>> Yes, JSON Schema allows to define multiple possible types for an
>> attribute (see #/definitions/expression/val for instance). But I don't
>> follow regarding set lookup: There are other uses for an expression on
>> RHS as well, no?
>>
>> > > > > > Regarding asynchronism between input and output, not sure I follow.
>> > > > >
>> > > > > I am grepping through tests/py/*/*.t for pattern 'ok;':
>> > > > >
>> > > > > - Adjacent ranges are combined.
>> > > >
>> > > > We should disable this, there was some discussion on this recently.
>> > > > User should request this explicitly to happen through some knob.
>> > >
>> > > I agree. While it's a nice idea, adding two adjacent ranges and later
>> > > wanting to delete one of them is troublesome. Do you have a name in mind
>> > > for that knob? Maybe something more generic which could cover other
>> > > cases of overly intelligent nft (in the future) as well?
>> >
>> > Probably "coalesce" or "merge".
>>
>> So you'd prefer a specific flag just for that feature? I had a more
>> generic one in mind, something like "optimize" for instance.
>
> At this stage, I'm considering disabling range automerge before 0.8.1 is
> released, so we can revisit this later on with something that users
> will explicitly enable on demand.
>
>> [...]
>> > > > > - meta iif "lo" accept;ok;iif "lo" accept
>> > > > >   -> Maybe less abstraction?
>> > > >
>> > > > This is just dealing with something that is causing us problems, that
>> > > > is, iif is handled as primary key, so we cannot reuse it in the
>> > > > grammar given it results in shift-reduce conflicts.
>> > >
>> > > The question is why allow both variants then? Since 'iif' is being used
>> > > as fib_flag as well, using 'iif' alone should be deprecated. Or is this
>> > > a case of backwards compatibility?
>> >
>> > It was compromise solution, not to break syntax all of a sudden,
>> > allowing old and new ways for a little while. But thi

Re: libnftables extended API proposal

2018-01-09 Thread Pablo Neira Ayuso
On Fri, Jan 05, 2018 at 06:52:03PM +0100, Phil Sutter wrote:
> Hi Pablo,
> 
> On Tue, Jan 02, 2018 at 07:02:19PM +0100, Pablo Neira Ayuso wrote:
> > On Fri, Dec 29, 2017 at 03:58:16PM +0100, Phil Sutter wrote:
> > > On Thu, Dec 28, 2017 at 08:21:41PM +0100, Pablo Neira Ayuso wrote:
> > > > On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
> [...]
> > > > > But isn't the problem of keeping the API compatible comparable to
> > > > > the problem of keeping the JSON representation compatible?
> > > > 
> > > > Well, keeping backward compatibility is always a burden to carry on.
> > > > Even though, to me, JSON is as extensible as netlink is, ie. we can
> > > > add new keys and deprecate things without breaking backward.
> > > 
> > > Yes, the format is very flexible. But how does that help with
> > > compatibility? You'll have to support the deprecated attributes or JSON
> > > without the new attributes either way, no?
> > 
> > Probably it's my subjective judgement that maintaining the json layout
> > will be easier than a large bunch of APIs and internal object
> > representations through getters/setters.
> > 
> > Anyway, these days, we expect people to use modern languages to build
> > upper layers in the stack, right? And that seems to mix well with JSON.
> > Again core infrastructure person here talking about upper layers, so
> > take this lightly ;-).
> 
> Yeah, for firewalld at least JSON is not a disadvantage. Not sure about
> projects written in C though, but that's a different kettle of fish. :)

It seems to me that much of this new control plane/orchestration software
is not written in C, so this representation can also be useful to them.

> > [...]
> > > > Oh, I can help you on that. Although you're very much closer to
> > > > firewalld usecases that I'm, so probably a draft from you on this
> > > > would be nice ;-)
> > > 
> > > I went ahead and converted my local ruleset into JSON manually (until I
> > > got bored), see attached ruleset.json. Then I wrote a schema to validate
> > > it (also attached). Please let me know if you're OK with the base
> > > format at least, i.e. everything except expressions and statements. Any
> > > feedback on the few statements and expressions I put in there is highly
> > > appreciated as well, of course! :)
> > 
> > Probably instead of having left and right, we can replace it by:
> > 
> > "match" : {
> > "key": {
> > ...
> > },
> > "value": "lo"
> > }
> > 
> > Then, allow to place expressions as "value" when it comes from a set
> > lookup.
> 
> Yes, JSON Schema allows to define multiple possible types for an
> attribute (see #/definitions/expression/val for instance). But I don't
> follow regarding set lookup: There are other uses for an expression on
> RHS as well, no?
> 
> > > > > > Regarding asynchronism between input and output, not sure I follow.
> > > > > 
> > > > > I am grepping through tests/py/*/*.t for pattern 'ok;':
> > > > > 
> > > > > - Adjacent ranges are combined.
> > > > 
> > > > We should disable this, there was some discussion on this recently.
> > > > User should request this explicitly to happen through some knob.
> > > 
> > > I agree. While it's a nice idea, adding two adjacent ranges and later
> > > wanting to delete one of them is troublesome. Do you have a name in mind
> > > for that knob? Maybe something more generic which could cover other
> > > cases of overly intelligent nft (in the future) as well?
> > 
> > Probably "coalesce" or "merge".
> 
> So you'd prefer a specific flag just for that feature? I had a more
> generic one in mind, something like "optimize" for instance.

At this stage, I'm considering disabling range automerge before 0.8.1 is
released, so we can revisit this later on with something that users will
explicitly enable on demand.

> [...]
> > > > > - meta iif "lo" accept;ok;iif "lo" accept
> > > > >   -> Maybe less abstraction?
> > > > 
> > > > This is just dealing with something that is causing us problems, that
> > > > is, iif is handled as primary key, so we cannot reuse it in the
> > > > grammar given it results in shift-reduce conflicts.
> > > 
> > > The question is why allow both variants then? Since 'iif' is being used
> > > as fib_flag as well, using 'iif' alone should be deprecated. Or is this
> > > a case of backwards compatibility?
> > 
> > It was compromise solution, not to break syntax all of a sudden,
> > allowing old and new ways for a little while. But this one, I think it
> > was not add this.
> 
> I couldn't parse your last sentence here. :)

Sorry, I meant: 'meta iif' came first, then 'iif' was added. To avoid
breaking things, the old 'meta iif' has been preserved.

> > > > > - tcp dport 22 iiftype ether ip daddr 1.2.3.4 ether saddr 
> > > > > 00:0f:54:0c:11:4 accept ok;tcp dport 22 ether saddr 00:0f:54:0c:11:04 
> > > > > ip daddr 1.2.3.4 accept
> > > > >   -> Here something is "optimized out", not sure if it should be kept 
> > > > > in
> > > > >   JSON.
> > > > 
> > 

Re: libnftables extended API proposal

2018-01-05 Thread Phil Sutter
Hi Pablo,

On Tue, Jan 02, 2018 at 07:02:19PM +0100, Pablo Neira Ayuso wrote:
> On Fri, Dec 29, 2017 at 03:58:16PM +0100, Phil Sutter wrote:
> > On Thu, Dec 28, 2017 at 08:21:41PM +0100, Pablo Neira Ayuso wrote:
> > > On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
[...]
> > > > But isn't the problem of keeping the API compatible comparable to
> > > > the problem of keeping the JSON representation compatible?
> > > 
> > > Well, keeping backward compatibility is always a burden to carry on.
> > > Even though, to me, JSON is as extensible as netlink is, ie. we can
> > > add new keys and deprecate things without breaking backward.
> > 
> > Yes, the format is very flexible. But how does that help with
> > compatibility? You'll have to support the deprecated attributes or JSON
> > without the new attributes either way, no?
> 
> Probably it's my subjective judgement that maintaining the json layout
> will be easier than a large bunch of APIs and internal object
> representations through getters/setters.
> 
> Anyway, these days, we expect people to use modern languages to build
> upper layers in the stack, right? And that seems to mix well with JSON.
> Again core infrastructure person here talking about upper layers, so
> take this lightly ;-).

Yeah, for firewalld at least JSON is not a disadvantage. Not sure about
projects written in C though, but that's a different kettle of fish. :)

> [...]
> > > Oh, I can help you on that. Although you're very much closer to
> > > firewalld usecases that I'm, so probably a draft from you on this
> > > would be nice ;-)
> > 
> > I went ahead and converted my local ruleset into JSON manually (until I
> > got bored), see attached ruleset.json. Then I wrote a schema to validate
> > it (also attached). Please let me know if you're OK with the base
> > format at least, i.e. everything except expressions and statements. Any
> > feedback on the few statements and expressions I put in there is highly
> > appreciated as well, of course! :)
> 
> Probably instead of having left and right, we can replace it by:
> 
> "match" : {
> "key": {
> ...
> },
> "value": "lo"
> }
> 
> Then, allow to place expressions as "value" when it comes from a set
> lookup.

Yes, JSON Schema allows to define multiple possible types for an
attribute (see #/definitions/expression/val for instance). But I don't
follow regarding set lookup: There are other uses for an expression on
RHS as well, no?

> > > > > Regarding asynchronism between input and output, not sure I follow.
> > > > 
> > > > I am grepping through tests/py/*/*.t for pattern 'ok;':
> > > > 
> > > > - Adjacent ranges are combined.
> > > 
> > > We should disable this, there was some discussion on this recently.
> > > User should request this explicitly to happen through some knob.
> > 
> > I agree. While it's a nice idea, adding two adjacent ranges and later
> > wanting to delete one of them is troublesome. Do you have a name in mind
> > for that knob? Maybe something more generic which could cover other
> > cases of overly intelligent nft (in the future) as well?
> 
> Probably "coalesce" or "merge".

So you'd prefer a specific flag just for that feature? I had a more
generic one in mind, something like "optimize" for instance.
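For reference, the auto-merge behaviour being discussed is visible with
interval sets (a sketch; the exact listing output depends on the nft
version):

    # Two adjacent intervals added separately...
    nft add set inet filter allowed '{ type inet_service; flags interval; }'
    nft add element inet filter allowed '{ 1000-1999 }'
    nft add element inet filter allowed '{ 2000-2999 }'
    # ...may be listed back as a single merged interval 1000-2999,
    # which can then no longer be deleted piecewise.
    nft list set inet filter allowed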

[...]
> > > > - meta iif "lo" accept;ok;iif "lo" accept
> > > >   -> Maybe less abstraction?
> > > 
> > > This is just dealing with something that is causing us problems, that
> > > is, iif is handled as primary key, so we cannot reuse it in the
> > > grammar given it results in shift-reduce conflicts.
> > 
> > The question is why allow both variants then? Since 'iif' is being used
> > as fib_flag as well, using 'iif' alone should be deprecated. Or is this
> > a case of backwards compatibility?
> 
> It was compromise solution, not to break syntax all of a sudden,
> allowing old and new ways for a little while. But this one, I think it
> was not add this.

I couldn't parse your last sentence here. :)

> > > > - tcp dport 22 iiftype ether ip daddr 1.2.3.4 ether saddr 
> > > > 00:0f:54:0c:11:4 accept ok;tcp dport 22 ether saddr 00:0f:54:0c:11:04 
> > > > ip daddr 1.2.3.4 accept
> > > >   -> Here something is "optimized out", not sure if it should be kept in
> > > >   JSON.
> > > 
> > > This is testing redudant information, that we can remove given we can
> > > infer it.
> > 
> > Yeah, similar situation to the 'meta l4proto' one above. The (still)
> > tricky part is to communicate assigned handles back to the application.
> > Maybe we could return an exact copy of their JSON with just handle
> > properties added?
> 
> Yes, that should be fine. We already have the NLM_F_ECHO in place,
> just a matter of supporting json there.
> 
> BTW, a simple testsuite for this would be good too.

Sure! Maybe the existing data in tests/py could be reused (the *.payload
files at least).

> P.S: Happy new year BTW.

Thanks, likewise! :)

Cheers, Phil

Re: libnftables extended API proposal

2018-01-02 Thread Pablo Neira Ayuso
Hi Phil,

On Fri, Dec 29, 2017 at 03:58:16PM +0100, Phil Sutter wrote:
> On Thu, Dec 28, 2017 at 08:21:41PM +0100, Pablo Neira Ayuso wrote:
> > On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
[...]
> > Yes, that would place a bit more work on the library, but I think we
> > should provide a high level representation that makes it easy for
> > people to express things. People should not deal with bitfield
> > handling, that's very tricky. Just have a look at commits that I and
> > Florian made to fix this. We don't want users to fall into this trap
> > by creating incorrect expressions, or worse, making them feel our
> > library does not work (even if it's their own mistake to build
> > incorrect expressions).
> 
> I wonder whether it would make sense to provide JSON-generating helpers,
> maybe even just for sample purposes.

If you add them, I'd prefer to keep them private internally, not
exposed through the API.

[...]
> > > But isn't the problem of keeping the API compatible comparable to
> > > the problem of keeping the JSON representation compatible?
> > 
> > Well, keeping backward compatibility is always a burden to carry on.
> > Even though, to me, JSON is as extensible as netlink is, ie. we can
> > add new keys and deprecate things without breaking backward.
> 
> Yes, the format is very flexible. But how does that help with
> compatibility? You'll have to support the deprecated attributes or JSON
> without the new attributes either way, no?

Probably it's my subjective judgement that maintaining the json layout will
be easier than a large bunch of APIs and internal object representations
through getters/setters.

Anyway, these days, we expect people to use modern languages to build
upper layers in the stack, right? And that seems to mix well with JSON.
Again core infrastructure person here talking about upper layers, so
take this lightly ;-).

[...]
> > Oh, I can help you on that. Although you're very much closer to
> > firewalld usecases that I'm, so probably a draft from you on this
> > would be nice ;-)
> 
> I went ahead and converted my local ruleset into JSON manually (until I
> got bored), see attached ruleset.json. Then I wrote a schema to validate
> it (also attached). Please let me know if you're OK with the base
> format at least, i.e. everything except expressions and statements. Any
> feedback on the few statements and expressions I put in there is highly
> appreciated as well, of course! :)

Probably instead of having left and right, we can replace it by:

"match" : {
"key": {
...
},
"value": "lo"
}

Then, allow to place expressions as "value" when it comes from a set
lookup.
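To make the shape concrete, a full rule in this draft format might look
roughly like this (a sketch only; the key names follow the draft schema
under discussion here and are not final):

    {
        "rule": {
            "family": "inet",
            "table": "filter",
            "chain": "input",
            "expr": [
                {
                    "match": {
                        "key": { "meta": "iifname" },
                        "value": "lo"
                    }
                },
                { "accept": null }
            ]
        }
    }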

> > > > Regarding asynchronism between input and output, not sure I follow.
> > > 
> > > I am grepping through tests/py/*/*.t for pattern 'ok;':
> > > 
> > > - Adjacent ranges are combined.
> > 
> > We should disable this, there was some discussion on this recently.
> > User should request this explicitly to happen through some knob.
> 
> I agree. While it's a nice idea, adding two adjacent ranges and later
> wanting to delete one of them is troublesome. Do you have a name in mind
> for that knob? Maybe something more generic which could cover other
> cases of overly intelligent nft (in the future) as well?

Probably "coalesce" or "merge".

> > > - meta l4proto ipv6-icmp icmpv6 type nd-router-advert;ok;icmpv6 type 
> > > nd-router-advert
> > >   -> more abstraction needed.
> > 
> > Not sure what you mean.
> 
> I meant 'icmpv6 ...' should imply 'meta l4proto ipv6-icmp', but that's
> the case already. Maybe this is a similar case as the combining of
> adjacent ranges in that it's a good thing per se but might not suit
> users' preferences.

This is an implicit dependency, right? I think this test just makes sure
that we handle explicit dependencies - in case the user decides to be
too verbose - just fine.

> > > - meta priority 0x87654321;ok;meta priority 8765:4321
> > >   -> What the heck? :)
> > 
> > This is testing that we accept basetypes, given classid is an integer
> > basetype.
> 
> Ah, I see.
> 
> > > - meta iif "lo" accept;ok;iif "lo" accept
> > >   -> Maybe less abstraction?
> > 
> > This is just dealing with something that is causing us problems, that
> > is, iif is handled as primary key, so we cannot reuse it in the
> > grammar given it results in shift-reduce conflicts.
> 
> The question is why allow both variants then? Since 'iif' is being used
> as fib_flag as well, using 'iif' alone should be deprecated. Or is this
> a case of backwards compatibility?

It was compromise solution, not to break syntax all of a sudden,
allowing old and new ways for a little while. But this one, I think it
was not add this.

> Anyway, this won't be an issue in JSON at least.
> 
> > > - tcp dport 22 iiftype ether ip daddr 1.2.3.4 ether saddr 
> > > 00:0f:54:0c:11:4 accept ok;tcp dport 22 ether saddr 00:0f:54:0c:11:04 ip 
> > > daddr 1.2.3.4 accept
> > >   -> Here something is "optimiz

Re: libnftables extended API proposal

2017-12-29 Thread Phil Sutter
On Thu, Dec 28, 2017 at 08:21:41PM +0100, Pablo Neira Ayuso wrote:
> Hi Phil,
> 
> On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
> > On Fri, Dec 22, 2017 at 09:39:03PM +0100, Pablo Neira Ayuso wrote:
> > > On Fri, Dec 22, 2017 at 04:30:49PM +0100, Phil Sutter wrote:
> > > > Hi Pablo,
> > > > 
> > > > On Fri, Dec 22, 2017 at 02:49:06PM +0100, Pablo Neira Ayuso wrote:
> > > > > On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> > > > > > On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > > > > > > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > > > > > > [...]
> > > > > > > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso 
> > > > > > > > wrote:
> > > > > > > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > > > > [...]
> > > > > > > > > I wonder if firewalld could generate high level json 
> > > > > > > > > representation
> > > > > > > > > instead, so it becomes a compiler/translator from its own
> > > > > > > > > representation to nftables abstract syntax tree. As I said, 
> > > > > > > > > the json
> > > > > > > > > representation is mapping to the abstract syntax tree we have 
> > > > > > > > > in nft.
> > > > > > > > > I'm refering to the high level json representation that 
> > > > > > > > > doesn't exist
> > > > > > > > > yet, not the low level one for libnftnl.
> > > > > > > > 
> > > > > > > > Can you point me to some information about that high level JSON
> > > > > > > > representation? Seems I'm missing something here.
> > > > > > > 
> > > > > > > It doesn't exist :-), if we expose a json-based API, third party 
> > > > > > > tool
> > > > > > > only have to generate the json high-level representation, we would
> > > > > > > need very few API calls for this, and anyone could generate 
> > > > > > > rulesets
> > > > > > > for nftables, without relying on the bison parser, given the json
> > > > > > > representation exposes the abstract syntax tree.
> > > > > > 
> > > > > > So your idea is to accept a whole command in JSON format from
> > > > > > applications? And output in JSON format as well since that is 
> > > > > > easier for
> > > > > > parsing than human readable text we have right now?
> > > > > 
> > > > > Just brainstorming here, we're discussing an API for third party
> > > > > applications. In this case, they just need to build the json
> > > > > representation for the ruleset they want to add. They could even embed
> > > > > this into a network message that they can send over the wire.
> > > > > 
> > > > > > I'm not sure about the '[ base, offset, length ]' part though:
> > > > > > Applications would have to care about protocol header layout 
> > > > > > including
> > > > > > any specialties themselves, or should libnftables provide them with
> > > > > > convenience functions to generate the correct JSON markup?
> > > > > 
> > > > > It depends, you may want to expose json representations for each
> > > > > protocol payload you support.
> > > > > 
> > > > > > For simple stuff like matching on a TCP port there's probably no
> > > > > > need, but correctly interpreting IPv4 ToS field is rather
> > > > > > error-prone I guess.
> > > > > 
> > > > > And bitfields are going to be cumbersome too, so we would indeed need
> > > > > a json representation for each protocol that we support, so third
> > > > > party applications don't need to deal with this.
> > > > > 
> > > > > > The approach seems simple at first, but application input in JSON 
> > > > > > format
> > > > > > has to be validated as well, so I fear we'll end up with a second 
> > > > > > parser
> > > > > > to avoid the first one.
> > > > > 
> > > > > There are libraries like jansson that already do the parsing for us,
> > > > > so we don't need to maintain our own json parser. We would still need
> > > > > internal code to libnftables, to navigate the json representation and
> > > > > create the objects.
> > > > 
> > > > Yes sure, there are libraries doing the actual parsing of JSON -
> > > > probably I wasn't clear enough. My point is what happens if you have a
> > > > parsed JSON tree (or array, however it may look like in practice). The
> > > > data sent by the application is either explicit enough for the
> > > > translation into netlink messages to be really trivial, or it is not
> > > > (which I prefer, otherwise applications could use libnftnl directly with
> > > > no drawback) - then we still have to implement a middle layer between
> > > > data in JSON and nftables objects. Maybe an example will do:
> > > > 
> > > > | [{
> > > > |   "type": "relational",
> > > > |   "left": {
> > > > |   "type": "expression",
> > > > |   "name": "tcp_hdr_expr",
> > > > |   "value": {
> > > > |   "type": "tcp_hdr_field",
> > > > |   "value": "dport",
> > > > |   },
> > > > |   },
> > > > |   "right": {
> > > > |   "type": "expression",
>

Re: libnftables extended API proposal

2017-12-28 Thread Pablo Neira Ayuso
Hi Phil,

On Sat, Dec 23, 2017 at 02:19:41PM +0100, Phil Sutter wrote:
> On Fri, Dec 22, 2017 at 09:39:03PM +0100, Pablo Neira Ayuso wrote:
> > On Fri, Dec 22, 2017 at 04:30:49PM +0100, Phil Sutter wrote:
> > > Hi Pablo,
> > > 
> > > On Fri, Dec 22, 2017 at 02:49:06PM +0100, Pablo Neira Ayuso wrote:
> > > > On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> > > > > On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > > > > > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > > > > > [...]
> > > > > > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > > > > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > > > [...]
> > > > > > > > I wonder if firewalld could generate high level json 
> > > > > > > > representation
> > > > > > > > instead, so it becomes a compiler/translator from its own
> > > > > > > > representation to nftables abstract syntax tree. As I said, the 
> > > > > > > > json
> > > > > > > > representation is mapping to the abstract syntax tree we have 
> > > > > > > > in nft.
> > > > > > > > I'm refering to the high level json representation that doesn't 
> > > > > > > > exist
> > > > > > > > yet, not the low level one for libnftnl.
> > > > > > > 
> > > > > > > Can you point me to some information about that high level JSON
> > > > > > > representation? Seems I'm missing something here.
> > > > > > 
> > > > > > It doesn't exist :-), if we expose a json-based API, third party 
> > > > > > tool
> > > > > > only have to generate the json high-level representation, we would
> > > > > > need very few API calls for this, and anyone could generate rulesets
> > > > > > for nftables, without relying on the bison parser, given the json
> > > > > > representation exposes the abstract syntax tree.
> > > > > 
> > > > > So your idea is to accept a whole command in JSON format from
> > > > > applications? And output in JSON format as well since that is easier 
> > > > > for
> > > > > parsing than human readable text we have right now?
> > > > 
> > > > Just brainstorming here, we're discussing an API for third party
> > > > applications. In this case, they just need to build the json
> > > > representation for the ruleset they want to add. They could even embed
> > > > this into a network message that they can send over the wire.
> > > > 
> > > > > I'm not sure about the '[ base, offset, length ]' part though:
> > > > > Applications would have to care about protocol header layout including
> > > > > any specialties themselves, or should libnftables provide them with
> > > > > convenience functions to generate the correct JSON markup?
> > > > 
> > > > It depends, you may want to expose json representations for each
> > > > protocol payload you support.
> > > > 
> > > > > For simple stuff like matching on a TCP port there's probably no
> > > > > need, but correctly interpreting IPv4 ToS field is rather
> > > > > error-prone I guess.
> > > > 
> > > > And bitfields are going to be cumbersome too, so we would indeed need
> > > > a json representation for each protocol that we support, so third
> > > > party applications don't need to deal with this.
> > > > 
> > > > > The approach seems simple at first, but application input in JSON 
> > > > > format
> > > > > has to be validated as well, so I fear we'll end up with a second 
> > > > > parser
> > > > > to avoid the first one.
> > > > 
> > > > There are libraries like jansson that already do the parsing for us,
> > > > so we don't need to maintain our own json parser. We would still need
> > > > internal code to libnftables, to navigate the json representation and
> > > > create the objects.
> > > 
> > > Yes sure, there are libraries doing the actual parsing of JSON -
> > > probably I wasn't clear enough. My point is what happens if you have a
> > > parsed JSON tree (or array, however it may look in practice). The
> > > data sent by the application is either explicit enough for the
> > > translation into netlink messages to be really trivial, or it is not
> > > (which I prefer, otherwise applications could use libnftnl directly with
> > > no drawback) - then we still have to implement a middle layer between
> > > data in JSON and nftables objects. Maybe an example will do:
> > > 
> > > | [{
> > > |     "type": "relational",
> > > |     "left": {
> > > |         "type": "expression",
> > > |         "name": "tcp_hdr_expr",
> > > |         "value": {
> > > |             "type": "tcp_hdr_field",
> > > |             "value": "dport",
> > > |         },
> > > |     },
> > > |     "right": {
> > > |         "type": "expression",
> > > |         "name": "integer_expr",
> > > |         "value": 22,
> > > |     }
> > > | }]
> > 
> > Probably a simpler representation, like this?
> > 
> > [{
> >     "match": {
> >         "left": {
> >             "type": "payload",
> >             "name": "tcp",
> >             "field": "dport",
> >         },
> >         "right": {
> >             "type": "immediate",
> >             "value": 22,
> >         }
> >     }
> > }]

Re: libnftables extended API proposal

2017-12-23 Thread Phil Sutter
On Fri, Dec 22, 2017 at 09:39:03PM +0100, Pablo Neira Ayuso wrote:
> On Fri, Dec 22, 2017 at 04:30:49PM +0100, Phil Sutter wrote:
> > Hi Pablo,
> > 
> > On Fri, Dec 22, 2017 at 02:49:06PM +0100, Pablo Neira Ayuso wrote:
> > > On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> > > > On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > > > > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > > > > [...]
> > > > > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > > > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > > [...]
> > > > > > > I wonder if firewalld could generate high level json 
> > > > > > > representation
> > > > > > > instead, so it becomes a compiler/translator from its own
> > > > > > > representation to nftables abstract syntax tree. As I said, the 
> > > > > > > json
> > > > > > > representation is mapping to the abstract syntax tree we have in 
> > > > > > > nft.
> > > > > > > I'm referring to the high level json representation that doesn't
> > > > > > > exist
> > > > > > > yet, not the low level one for libnftnl.
> > > > > > 
> > > > > > Can you point me to some information about that high level JSON
> > > > > > representation? Seems I'm missing something here.
> > > > > 
> > > > > It doesn't exist :-), if we expose a json-based API, third party tools
> > > > > only have to generate the json high-level representation, we would
> > > > > need very few API calls for this, and anyone could generate rulesets
> > > > > for nftables, without relying on the bison parser, given the json
> > > > > representation exposes the abstract syntax tree.
> > > > 
> > > > So your idea is to accept a whole command in JSON format from
> > > > applications? And output in JSON format as well, since that is easier
> > > > to parse than the human-readable text we have right now?
> > > 
> > > Just brainstorming here, we're discussing an API for third party
> > > applications. In this case, they just need to build the json
> > > representation for the ruleset they want to add. They could even embed
> > > this into a network message that they can send over the wire.
> > > 
> > > > I'm not sure about the '[ base, offset, length ]' part though:
> > > > Applications would have to care about protocol header layout including
> > > > any specialties themselves, or should libnftables provide them with
> > > > convenience functions to generate the correct JSON markup?
> > > 
> > > It depends, you may want to expose json representations for each
> > > protocol payload you support.
> > > 
> > > > For simple stuff like matching on a TCP port there's probably no
> > > > need, but correctly interpreting IPv4 ToS field is rather
> > > > error-prone I guess.
> > > 
> > > And bitfields are going to be cumbersome too, so we would indeed need
> > > a json representation for each protocol that we support, so third
> > > party applications don't need to deal with this.
> > > 
> > > > The approach seems simple at first, but application input in JSON format
> > > > has to be validated as well, so I fear we'll end up with a second parser
> > > > to avoid the first one.
> > > 
> > > There are libraries like jansson that already do the parsing for us,
> > > so we don't need to maintain our own json parser. We would still need
> > > internal code to libnftables, to navigate the json representation and
> > > create the objects.
> > 
> > Yes sure, there are libraries doing the actual parsing of JSON -
> > probably I wasn't clear enough. My point is what happens if you have a
> > parsed JSON tree (or array, however it may look in practice). The
> > data sent by the application is either explicit enough for the
> > translation into netlink messages to be really trivial, or it is not
> > (which I prefer, otherwise applications could use libnftnl directly with
> > no drawback) - then we still have to implement a middle layer between
> > data in JSON and nftables objects. Maybe an example will do:
> > 
> > | [{
> > |     "type": "relational",
> > |     "left": {
> > |         "type": "expression",
> > |         "name": "tcp_hdr_expr",
> > |         "value": {
> > |             "type": "tcp_hdr_field",
> > |             "value": "dport",
> > |         },
> > |     },
> > |     "right": {
> > |         "type": "expression",
> > |         "name": "integer_expr",
> > |         "value": 22,
> > |     }
> > | }]
> 
> Probably a simpler representation, like this?
> 
> [{
>     "match": {
>         "left": {
>             "type": "payload",
>             "name": "tcp",
>             "field": "dport",
>         },
>         "right": {
>             "type": "immediate",
>             "value": 22,
>         }
>     }
> }]
> 
> For non-matching things, we can add an "action".

You mean for non-EQ type relationals? I would just add a third field
below.

Re: libnftables extended API proposal

2017-12-22 Thread Pablo Neira Ayuso
On Fri, Dec 22, 2017 at 04:30:49PM +0100, Phil Sutter wrote:
> Hi Pablo,
> 
> On Fri, Dec 22, 2017 at 02:49:06PM +0100, Pablo Neira Ayuso wrote:
> > On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> > > On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > > > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > > > [...]
> > > > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > [...]
> > > > > > I wonder if firewalld could generate high level json representation
> > > > > > instead, so it becomes a compiler/translator from its own
> > > > > > representation to nftables abstract syntax tree. As I said, the json
> > > > > > representation is mapping to the abstract syntax tree we have in 
> > > > > > nft.
> > > > > > I'm referring to the high level json representation that doesn't
> > > > > > exist
> > > > > > yet, not the low level one for libnftnl.
> > > > > 
> > > > > Can you point me to some information about that high level JSON
> > > > > representation? Seems I'm missing something here.
> > > > 
> > > > It doesn't exist :-), if we expose a json-based API, third party tools
> > > > only have to generate the json high-level representation, we would
> > > > need very few API calls for this, and anyone could generate rulesets
> > > > for nftables, without relying on the bison parser, given the json
> > > > representation exposes the abstract syntax tree.
> > > 
> > > So your idea is to accept a whole command in JSON format from
> > > applications? And output in JSON format as well, since that is easier
> > > to parse than the human-readable text we have right now?
> > 
> > Just brainstorming here, we're discussing an API for third party
> > applications. In this case, they just need to build the json
> > representation for the ruleset they want to add. They could even embed
> > this into a network message that they can send over the wire.
> > 
> > > I'm not sure about the '[ base, offset, length ]' part though:
> > > Applications would have to care about protocol header layout including
> > > any specialties themselves, or should libnftables provide them with
> > > convenience functions to generate the correct JSON markup?
> > 
> > It depends, you may want to expose json representations for each
> > protocol payload you support.
> > 
> > > For simple stuff like matching on a TCP port there's probably no
> > > need, but correctly interpreting IPv4 ToS field is rather
> > > error-prone I guess.
> > 
> > And bitfields are going to be cumbersome too, so we would indeed need
> > a json representation for each protocol that we support, so third
> > party applications don't need to deal with this.
> > 
> > > The approach seems simple at first, but application input in JSON format
> > > has to be validated as well, so I fear we'll end up with a second parser
> > > to avoid the first one.
> > 
> > There are libraries like jansson that already do the parsing for us,
> > so we don't need to maintain our own json parser. We would still need
> > internal code to libnftables, to navigate the json representation and
> > create the objects.
> 
> Yes sure, there are libraries doing the actual parsing of JSON -
> probably I wasn't clear enough. My point is what happens if you have a
> parsed JSON tree (or array, however it may look in practice). The
> data sent by the application is either explicit enough for the
> translation into netlink messages to be really trivial, or it is not
> (which I prefer, otherwise applications could use libnftnl directly with
> no drawback) - then we still have to implement a middle layer between
> data in JSON and nftables objects. Maybe an example will do:
> 
> | [{
> |     "type": "relational",
> |     "left": {
> |         "type": "expression",
> |         "name": "tcp_hdr_expr",
> |         "value": {
> |             "type": "tcp_hdr_field",
> |             "value": "dport",
> |         },
> |     },
> |     "right": {
> |         "type": "expression",
> |         "name": "integer_expr",
> |         "value": 22,
> |     }
> | }]

Probably a simpler representation, like this?

[{
    "match": {
        "left": {
            "type": "payload",
            "name": "tcp",
            "field": "dport",
        },
        "right": {
            "type": "immediate",
            "value": 22,
        }
    }
}]

For non-matching things, we can add an "action".

I wonder if this can even be made simpler and more compact.
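
Purely as a sketch of that "action" idea - the key names below are an
illustrative guess, nothing like this is implemented yet - a complete
rule might then read:

[{
    "match": {
        "left": { "type": "payload", "name": "tcp", "field": "dport" },
        "right": { "type": "immediate", "value": 22 }
    },
    "action": { "type": "verdict", "value": "accept" }
}]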

> So this might be how a relational expression could be represented in
> JSON. Note that I intentionally didn't break it down to payload_expr,
> otherwise it would have to contain the TCP header offset, etc. (In this
> case that might be preferred, but as stated above it's not the best option
> in every case.)

Re: libnftables extended API proposal

2017-12-22 Thread Phil Sutter
Hi Pablo,

On Fri, Dec 22, 2017 at 02:49:06PM +0100, Pablo Neira Ayuso wrote:
> On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> > On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > > [...]
> > > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> [...]
> > > > > I wonder if firewalld could generate high level json representation
> > > > > instead, so it becomes a compiler/translator from its own
> > > > > representation to nftables abstract syntax tree. As I said, the json
> > > > > representation is mapping to the abstract syntax tree we have in nft.
> > > > > I'm referring to the high level json representation that doesn't exist
> > > > > yet, not the low level one for libnftnl.
> > > > 
> > > > Can you point me to some information about that high level JSON
> > > > representation? Seems I'm missing something here.
> > > 
> > > It doesn't exist :-), if we expose a json-based API, third party tools
> > > only have to generate the json high-level representation, we would
> > > need very few API calls for this, and anyone could generate rulesets
> > > for nftables, without relying on the bison parser, given the json
> > > representation exposes the abstract syntax tree.
> > 
> > So your idea is to accept a whole command in JSON format from
> > applications? And output in JSON format as well, since that is easier
> > to parse than the human-readable text we have right now?
> 
> Just brainstorming here, we're discussing an API for third party
> applications. In this case, they just need to build the json
> representation for the ruleset they want to add. They could even embed
> this into a network message that they can send over the wire.
> 
> > I'm not sure about the '[ base, offset, length ]' part though:
> > Applications would have to care about protocol header layout including
> > any specialties themselves, or should libnftables provide them with
> > convenience functions to generate the correct JSON markup?
> 
> It depends, you may want to expose json representations for each
> protocol payload you support.
> 
> > For simple stuff like matching on a TCP port there's probably no
> > need, but correctly interpreting IPv4 ToS field is rather
> > error-prone I guess.
> 
> And bitfields are going to be cumbersome too, so we would indeed need
> a json representation for each protocol that we support, so third
> party applications don't need to deal with this.
> 
> > The approach seems simple at first, but application input in JSON format
> > has to be validated as well, so I fear we'll end up with a second parser
> > to avoid the first one.
> 
> There are libraries like jansson that already do the parsing for us,
> so we don't need to maintain our own json parser. We would still need
> internal code to libnftables, to navigate the json representation and
> create the objects.

Yes sure, there are libraries doing the actual parsing of JSON -
probably I wasn't clear enough. My point is what happens if you have a
parsed JSON tree (or array, however it may look in practice). The
data sent by the application is either explicit enough for the
translation into netlink messages to be really trivial, or it is not
(which I prefer, otherwise applications could use libnftnl directly with
no drawback) - then we still have to implement a middle layer between
data in JSON and nftables objects. Maybe an example will do:

| [{
|     "type": "relational",
|     "left": {
|         "type": "expression",
|         "name": "tcp_hdr_expr",
|         "value": {
|             "type": "tcp_hdr_field",
|             "value": "dport",
|         },
|     },
|     "right": {
|         "type": "expression",
|         "name": "integer_expr",
|         "value": 22,
|     }
| }]

So this might be how a relational expression could be represented in
JSON. Note that I intentionally didn't break it down to payload_expr,
otherwise it would have to contain the TCP header offset, etc. (In this case that
might be preferred, but as stated above it's not the best option in
every case.)

Parsing^WInterpreting code would then probably look like:

| type = json_string_value(json_object_get(data, "type"));
| if (type && !strcmp(type, "relational")) {
|         left = parse_expr(json_object_get(data, "left"));
|         right = parse_expr(json_object_get(data, "right"));
|         expr = relational_expr_alloc(&internal_location,
|                                      OP_IMPLICIT, left, right);
| }

I think this last part might easily become bigger than parser_bison.y
and scanner.l combined.

> On our side, we would need to maintain a very simple API, basically
> that allows you to parse a json representation and to export it. For
> backward compatibility reasons, we have to keep supporting the json
> layout, instead of a large number of functions.

Re: libnftables extended API proposal

2017-12-22 Thread Pablo Neira Ayuso
Hi Phil,

On Fri, Dec 22, 2017 at 02:08:16PM +0100, Phil Sutter wrote:
> On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> > On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> > [...]
> > > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
[...]
> > > > I wonder if firewalld could generate high level json representation
> > > > instead, so it becomes a compiler/translator from its own
> > > > representation to nftables abstract syntax tree. As I said, the json
> > > > representation is mapping to the abstract syntax tree we have in nft.
> > > > I'm referring to the high level json representation that doesn't exist
> > > > yet, not the low level one for libnftnl.
> > > 
> > > Can you point me to some information about that high level JSON
> > > representation? Seems I'm missing something here.
> > 
> > It doesn't exist :-), if we expose a json-based API, third party tools
> > only have to generate the json high-level representation, we would
> > need very few API calls for this, and anyone could generate rulesets
> > for nftables, without relying on the bison parser, given the json
> > representation exposes the abstract syntax tree.
> 
> So your idea is to accept a whole command in JSON format from
> applications? And output in JSON format as well, since that is easier
> to parse than the human-readable text we have right now?

Just brainstorming here, we're discussing an API for third party
applications. In this case, they just need to build the json
representation for the ruleset they want to add. They could even embed
this into a network message that they can send over the wire.

> I'm not sure about the '[ base, offset, length ]' part though:
> Applications would have to care about protocol header layout including
> any specialties themselves, or should libnftables provide them with
> convenience functions to generate the correct JSON markup?

It depends, you may want to expose json representations for each
protocol payload you support.

> For simple stuff like matching on a TCP port there's probably no
> need, but correctly interpreting IPv4 ToS field is rather
> error-prone I guess.

And bitfields are going to be cumbersome too, so we would indeed need
a json representation for each protocol that we support, so third
party applications don't need to deal with this.

> The approach seems simple at first, but application input in JSON format
> has to be validated as well, so I fear we'll end up with a second parser
> to avoid the first one.

There are libraries like jansson that already do the parsing for us,
so we don't need to maintain our own json parser. We would still need
internal code to libnftables, to navigate the json representation and
create the objects.
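
As a rough illustration of the jansson side, a minimal walk over the
hypothetical "match" layout from this thread could look like the
following - a sketch only, against a JSON layout that does not exist
yet, with error handling trimmed:

#include <jansson.h>
#include <stdio.h>

int main(void)
{
        /* The hypothetical high-level representation discussed above. */
        const char *buf =
                "[{\"match\": {"
                "\"left\": {\"type\": \"payload\", \"name\": \"tcp\", \"field\": \"dport\"},"
                "\"right\": {\"type\": \"immediate\", \"value\": 22}}}]";
        json_error_t err;
        json_t *root = json_loads(buf, 0, &err);
        json_t *match, *left, *right;

        if (!root)
                return 1;

        /* jansson hands us the parsed tree; we only navigate it. */
        match = json_object_get(json_array_get(root, 0), "match");
        left = json_object_get(match, "left");
        right = json_object_get(match, "right");

        printf("%s %s == %lld\n",
               json_string_value(json_object_get(left, "name")),
               json_string_value(json_object_get(left, "field")),
               (long long)json_integer_value(json_object_get(right, "value")));

        json_decref(root);
        return 0;
}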

On our side, we would need to maintain a very simple API, basically
that allows you to parse a json representation and to export it. For
backward compatibility reasons, we have to keep supporting the json
layout, instead of a large number of functions.

I guess the question here is whether this would be good for firewalld. I
haven't had a look at that code, but many third party applications I
have seen basically create iptables commands as text, so this approach
would be similar - well, actually better, since we would be providing a
well-structured representation.


Re: libnftables extended API proposal

2017-12-22 Thread Phil Sutter
Hi Pablo,

On Wed, Dec 20, 2017 at 11:23:36PM +0100, Pablo Neira Ayuso wrote:
> On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
> [...]
> > On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > > > On Sun, Dec 10, 2017 at 10:55:40PM +0100, Pablo Neira Ayuso wrote:
> > > > > On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> > > > > > On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > > > > > > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> > > > > > [...]
> > > > > > > > After tweaking the parser a bit, I can use it now to parse just 
> > > > > > > > a
> > > > > > > > set_list_member_expr and use the struct expr it returns. This 
> > > > > > > > made it
> > > > > > > > possible to create the desired struct cmd in above function 
> > > > > > > > without
> > > > > > > > having to invoke the parser there.
> > > > > > > > 
> > > > > > > > Applying this refinement consistently should allow reaching
> > > > > > > > arbitrary levels of granularity. For instance, one could stop
> > > > > > > > at statement level,
> > > > > > > > i.e. statements are created using a string representation. Or 
> > > > > > > > one could
> > > > > > > > go down to expression level, and statements are created using 
> > > > > > > > one or two
> > > > > > > > expressions (depending on whether it is relational or not). Of 
> > > > > > > > course
> > > > > > > > this means the library will eventually become as complicated as 
> > > > > > > > the
> > > > > > > > parser itself, not necessarily a good thing.
> > > > > > > 
> > > > > > > Yes, and we'll expose all internal representation details that we
> > > > > > > will need to maintain forever if we don't want to break backward
> > > > > > > compatibility.
> > > > > > 
> > > > > > Not necessarily. I had this in mind when declaring 'struct 
> > > > > > nft_table'
> > > > > > instead of reusing 'struct table'. :)
> > > > > > 
> > > > > > The parser defines the grammar, the library would just follow it. 
> > > > > > So if
> > > > > > a given internal change complies with the old grammar, it should 
> > > > > > comply
> > > > > > with the library as well. Though this is quite theoretical, of 
> > > > > > course.
> > > > > > 
> > > > > > Let's take relational expressions as a simple example: In bison, we
> > > > > > define
> > > > > > 'expr op rhs_expr'. An equivalent library function could be:
> > > > > > 
> > > > > > | struct nft_expr *nft_relational_new(struct nft_expr *,
> > > > > > | enum rel_ops,
> > > > > > | struct nft_expr *);
> > > > > 
> > > > > Then that means you would like to expose an API that allows you to
> > > > > build the abstract syntax tree.
> > > > 
> > > > That was the idea I had when I thought about how to transition from
> > > > fully text-based simple API to an extended one which allows working with
> > > > objects instead. We could start simple and refine further if
> > > > required/sensible. At the basic level, adding a new rule could be
> > > > something like:
> > > > 
> > > > | myrule = nft_rule_create("tcp dport 22 accept");
> > > > 
> > > > If required, one could implement rule building based on statements:
> > > > 
> > > > | stmt1 = nft_stmt_create("tcp dport 22");
> > > > | stmt2 = nft_stmt_create("accept");
> > > > | myrule = nft_rule_create();
> > > > | nft_rule_add_stmt(myrule, stmt1);
> > > > | nft_rule_add_stmt(myrule, stmt2);
> > > 
> > > This is mixing parsing and abstract syntax tree creation.
> > > 
> > > If you want to expose the syntax tree, then I would skip the parsing layer
> > > entirely and expose the syntax tree, which is what the json
> > > representation for the high level library will be doing.
> > 
> > But that means having to provide a creating function for every
> > expression there is, no?
> 
> Yes.
> 
> > > To support a new protocol, we will need a new library version too, even
> > > though the abstraction to represent a payload is already well-defined, i.e.
> > > [ base, offset, length ], which is pretty much everywhere the same,
> > > not only in nftables.
> > 
> > Sorry, I didn't get that. Are you talking about that JSON
> > representation?
> 
> Yes. The one that does not exist.
> 
> > > I wonder if firewalld could generate high level json representation
> > > instead, so it becomes a compiler/translator from its own
> > > representation to nftables abstract syntax tree. As I said, the json
> > > representation is mapping to the abstract syntax tree we have in nft.
> > > I'm referring to the high level json representation that doesn't exist
> > > yet, not the low level one for libnftnl.
> > 
> > Can you point me to some information about that high level JSON
> > representation? Seems I'm missing something here.
> 
> It doesn't exist :-), if we expose a json-based API, third party tools
> only have to generate the json high-level representation, we would
> need very few API calls for this, and anyone could generate rulesets
> for nftables, without relying on the bison parser, given the json
> representation exposes the abstract syntax tree.
> 

Re: libnftables extended API proposal

2017-12-20 Thread Pablo Neira Ayuso
Hi Phil,

On Wed, Dec 20, 2017 at 01:32:25PM +0100, Phil Sutter wrote:
[...]
> On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> > On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > > On Sun, Dec 10, 2017 at 10:55:40PM +0100, Pablo Neira Ayuso wrote:
> > > > On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> > > > > On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > > > > > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> > > > > [...]
> > > > > > > After tweaking the parser a bit, I can use it now to parse just a
> > > > > > > set_list_member_expr and use the struct expr it returns. This 
> > > > > > > made it
> > > > > > > possible to create the desired struct cmd in above function 
> > > > > > > without
> > > > > > > having to invoke the parser there.
> > > > > > > 
> > > > > > > Applying this refinement consistently should allow reaching
> > > > > > > arbitrary levels of granularity. For instance, one could stop at
> > > > > > > statement level,
> > > > > > > i.e. statements are created using a string representation. Or one 
> > > > > > > could
> > > > > > > go down to expression level, and statements are created using one 
> > > > > > > or two
> > > > > > > expressions (depending on whether it is relational or not). Of 
> > > > > > > course
> > > > > > > this means the library will eventually become as complicated as 
> > > > > > > the
> > > > > > > parser itself, not necessarily a good thing.
> > > > > > 
> > > > > > Yes, and we'll expose all internal representation details that we
> > > > > > will need to maintain forever if we don't want to break backward
> > > > > > compatibility.
> > > > > 
> > > > > Not necessarily. I had this in mind when declaring 'struct nft_table'
> > > > > instead of reusing 'struct table'. :)
> > > > > 
> > > > > The parser defines the grammar, the library would just follow it. So 
> > > > > if
> > > > > a given internal change complies with the old grammar, it should 
> > > > > comply
> > > > > with the library as well. Though this is quite theoretical, of course.
> > > > > 
> > > > > Let's take relational expressions as a simple example: In bison, we
> > > > > define
> > > > > 'expr op rhs_expr'. An equivalent library function could be:
> > > > > 
> > > > > | struct nft_expr *nft_relational_new(struct nft_expr *,
> > > > > |   enum rel_ops,
> > > > > |   struct nft_expr *);
> > > > 
> > > > Then that means you would like to expose an API that allows you to
> > > > build the abstract syntax tree.
> > > 
> > > That was the idea I had when I thought about how to transition from
> > > fully text-based simple API to an extended one which allows working with
> > > objects instead. We could start simple and refine further if
> > > required/sensible. At the basic level, adding a new rule could be
> > > something like:
> > > 
> > > | myrule = nft_rule_create("tcp dport 22 accept");
> > > 
> > > If required, one could implement rule building based on statements:
> > > 
> > > | stmt1 = nft_stmt_create("tcp dport 22");
> > > | stmt2 = nft_stmt_create("accept");
> > > | myrule = nft_rule_create();
> > > | nft_rule_add_stmt(myrule, stmt1);
> > > | nft_rule_add_stmt(myrule, stmt2);
> > 
> > This is mixing parsing and abstract syntax tree creation.
> > 
> > If you want to expose the syntax tree, then I would skip the parsing layer
> > entirely and expose the syntax tree, which is what the json
> > representation for the high level library will be doing.
> 
> But that means having to provide a creating function for every
> expression there is, no?

Yes.

> > To support a new protocol, we will need a new library version too, even
> > though the abstraction to represent a payload is already well-defined, i.e.
> > [ base, offset, length ], which is pretty much everywhere the same,
> > not only in nftables.
> 
> Sorry, I didn't get that. Are you talking about that JSON
> representation?

Yes. The one that does not exist.

> > I wonder if firewalld could generate high level json representation
> > instead, so it becomes a compiler/translator from its own
> > representation to nftables abstract syntax tree. As I said, the json
> > representation is mapping to the abstract syntax tree we have in nft.
> > I'm referring to the high level json representation that doesn't exist
> > yet, not the low level one for libnftnl.
> 
> Can you point me to some information about that high level JSON
> representation? Seems I'm missing something here.

It doesn't exist :-), if we expose a json-based API, third party tools
only have to generate the json high-level representation, we would
need very few API calls for this, and anyone could generate rulesets
for nftables, without relying on the bison parser, given the json
representation exposes the abstract syntax tree.
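
To make "very few API calls" concrete, the public surface might boil
down to a pair of functions - the names below are invented purely for
illustration, no such interface exists at the time of writing:

/* Hypothetical libnftables JSON API sketch. */
struct nft_ctx;

/* Parse a high-level JSON ruleset representation, then queue and
 * commit the resulting commands. */
int nft_json_run(struct nft_ctx *ctx, const char *json);

/* Export the current ruleset in the same JSON representation;
 * the caller frees the returned string. */
char *nft_json_export(struct nft_ctx *ctx);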

Re: libnftables extended API proposal

2017-12-20 Thread Phil Sutter
Hi Pablo,

On Tue, Dec 19, 2017 at 12:00:48AM +0100, Pablo Neira Ayuso wrote:
> On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> > On Sun, Dec 10, 2017 at 10:55:40PM +0100, Pablo Neira Ayuso wrote:
> > > On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> > > > On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > > > > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> > > > [...]
> > > > > > After tweaking the parser a bit, I can use it now to parse just a
> > > > > > set_list_member_expr and use the struct expr it returns. This made 
> > > > > > it
> > > > > > possible to create the desired struct cmd in above function without
> > > > > > having to invoke the parser there.
> > > > > > 
> > > > > > Applying this refinement consistently should allow reaching
> > > > > > arbitrary levels of granularity. For instance, one could stop at
> > > > > > statement level,
> > > > > > i.e. statements are created using a string representation. Or one 
> > > > > > could
> > > > > > go down to expression level, and statements are created using one 
> > > > > > or two
> > > > > > expressions (depending on whether it is relational or not). Of 
> > > > > > course
> > > > > > this means the library will eventually become as complicated as the
> > > > > > parser itself, not necessarily a good thing.
> > > > > 
> > > > > Yes, and we'll expose all internal representation details that we
> > > > > will need to maintain forever if we don't want to break backward
> > > > > compatibility.
> > > > 
> > > > Not necessarily. I had this in mind when declaring 'struct nft_table'
> > > > instead of reusing 'struct table'. :)
> > > > 
> > > > The parser defines the grammar, the library would just follow it. So if
> > > > a given internal change complies with the old grammar, it should comply
> > > > with the library as well. Though this is quite theoretical, of course.
> > > > 
> > > > Let's take relational expressions as a simple example: In bison, we define
> > > > 'expr op rhs_expr'. An equivalent library function could be:
> > > > 
> > > > | struct nft_expr *nft_relational_new(struct nft_expr *,
> > > > | enum rel_ops,
> > > > | struct nft_expr *);
> > > 
> > > Then that means you would like to expose an API that allows you to
> > > build the abstract syntax tree.
> > 
> > That was the idea I had when I thought about how to transition from
> > fully text-based simple API to an extended one which allows working with
> > objects instead. We could start simple and refine further if
> > required/sensible. At the basic level, adding a new rule could be
> > something like:
> > 
> > | myrule = nft_rule_create("tcp dport 22 accept");
> > 
> > If required, one could implement rule building based on statements:
> > 
> > | stmt1 = nft_stmt_create("tcp dport 22");
> > | stmt2 = nft_stmt_create("accept");
> > | myrule = nft_rule_create();
> > | nft_rule_add_stmt(myrule, stmt1);
> > | nft_rule_add_stmt(myrule, stmt2);
> 
> This is mixing parsing and abstract syntax tree creation.
> 
> If you want to expose the syntax tree, then I would skip the parsing layer
> entirely and expose the syntax tree, which is what the json
> representation for the high level library will be doing.

But that means having to provide a creating function for every
expression there is, no?

> To support a new protocol, we will need a new library version too, even
> though the abstraction to represent a payload is already well-defined, i.e.
> [ base, offset, length ], which is pretty much everywhere the same,
> not only in nftables.

Sorry, I didn't get that. Are you talking about that JSON
representation?

> I wonder if firewalld could generate high level json representation
> instead, so it becomes a compiler/translator from its own
> representation to nftables abstract syntax tree. As I said, the json
> representation is mapping to the abstract syntax tree we have in nft.
> I'm referring to the high level json representation that doesn't exist
> yet, not the low level one for libnftnl.

Can you point me to some information about that high level JSON
representation? Seems I'm missing something here.

Thanks, Phil


Re: libnftables extended API proposal

2017-12-18 Thread Pablo Neira Ayuso
Hi Phil,

On Sat, Dec 16, 2017 at 05:06:51PM +0100, Phil Sutter wrote:
> Hi Pablo,
> 
> On Sun, Dec 10, 2017 at 10:55:40PM +0100, Pablo Neira Ayuso wrote:
> > On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> > > On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > > > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> > > [...]
> > > > > After tweaking the parser a bit, I can use it now to parse just a
> > > > > set_list_member_expr and use the struct expr it returns. This made it
> > > > > possible to create the desired struct cmd in above function without
> > > > > having to invoke the parser there.
> > > > > 
> > > > > Applying this refinement consistently should allow reaching arbitrary
> > > > > levels of granularity. For instance, one could stop at statement
> > > > > level,
> > > > > i.e. statements are created using a string representation. Or one 
> > > > > could
> > > > > go down to expression level, and statements are created using one or 
> > > > > two
> > > > > expressions (depending on whether it is relational or not). Of course
> > > > > this means the library will eventually become as complicated as the
> > > > > parser itself, not necessarily a good thing.
> > > > 
> > > > Yes, and we'll expose all internal representation details that we
> > > > will need to maintain forever if we don't want to break backward
> > > > compatibility.
> > > 
> > > Not necessarily. I had this in mind when declaring 'struct nft_table'
> > > instead of reusing 'struct table'. :)
> > > 
> > > The parser defines the grammar, the library would just follow it. So if
> > > a given internal change complies with the old grammar, it should comply
> > > with the library as well. Though this is quite theoretical, of course.
> > > 
> > > Let's take relational expressions as a simple example: In bison, we define
> > > 'expr op rhs_expr'. An equivalent library function could be:
> > > 
> > > | struct nft_expr *nft_relational_new(struct nft_expr *,
> > > |   enum rel_ops,
> > > |   struct nft_expr *);
> > 
> > Then that means you would like to expose an API that allows you to
> > build the abstract syntax tree.
> 
> That was the idea I had when I thought about how to transition from
> fully text-based simple API to an extended one which allows working with
> objects instead. We could start simple and refine further if
> required/sensible. At the basic level, adding a new rule could be
> something like:
> 
> | myrule = nft_rule_create("tcp dport 22 accept");
> 
> If required, one could implement rule building based on statements:
> 
> | stmt1 = nft_stmt_create("tcp dport 22");
> | stmt2 = nft_stmt_create("accept");
> | myrule = nft_rule_create();
> | nft_rule_add_stmt(myrule, stmt1);
> | nft_rule_add_stmt(myrule, stmt2);

This is mixing parsing and abstract syntax tree creation.

If you want to expose the syntax tree, then I would skip the parsing layer
entirely and expose the syntax tree, which is what the json
representation for the high level library will be doing.

To support a new protocol, we will need a new library version too, even
though the abstraction to represent a payload is already well-defined, i.e.
[ base, offset, length ], which is pretty much everywhere the same,
not only in nftables.

I wonder if firewalld could generate high level json representation
instead, so it becomes a compiler/translator from its own
representation to nftables abstract syntax tree. As I said, the json
representation is mapping to the abstract syntax tree we have in nft.
I'm referring to the high level json representation that doesn't exist
yet, not the low level one for libnftnl.

> [...]
> > > Yes, that sounds good. I had something like this in mind:
> > > 
> > > | struct nft_stmt *nft_meta_match_immediate(enum nft_meta_type, void *data);
> > > | int nft_rule_append_stmt(struct nft_rule *r, struct nft_stmt *stmt);
> > > 
> > > The obvious problem is that at the time that meta match is created,
> > > there is no context information. So the second function would have to
> > > do that.
> > > 
> > > I am not sure if this kind of context evaluation works in any case. E.g.
> > > set elements are interpreted depending on the set they are added to. To
> > my surprise, that wasn't really an issue - the parser interprets it as a
> > constant symbol; when evaluating the expression as part of adding it to
> > > the set it is resolved properly. This might not work in any case,
> > > though.
> > 
> > Not sure I follow, what's the problem with the "missing context"?
> 
> The parser always has the full context available. E.g. if you pass it a
> string such as this:
> 
> | add rule ip t c tcp dport 22 accept
> 
> By the time it parses the individual statements, it already knows e.g.
> the table family and that might make a difference. As far as I can tell
> from my hacking, it is possible to tweak the parser so that one could
> use it to parse just a statement ('tcp dport 22').

Re: libnftables extended API proposal

2017-12-16 Thread Phil Sutter
Hi Pablo,

On Sun, Dec 10, 2017 at 10:55:40PM +0100, Pablo Neira Ayuso wrote:
> On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> > On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> > [...]
> > > > After tweaking the parser a bit, I can use it now to parse just a
> > > > set_list_member_expr and use the struct expr it returns. This made it
> > > > possible to create the desired struct cmd in above function without
> > > > having to invoke the parser there.
> > > > 
> > > > Applying this refinement consistently should allow reaching arbitrary
> > > > levels of granularity. For instance, one could stop at statement level,
> > > > i.e. statements are created using a string representation. Or one could
> > > > go down to expression level, and statements are created using one or two
> > > > expressions (depending on whether it is relational or not). Of course
> > > > this means the library will eventually become as complicated as the
> > > > parser itself, not necessarily a good thing.
> > > 
> > > Yes, and we'll expose all internal representation details that we
> > > will need to maintain forever if we don't want to break backward
> > > compatibility.
> > 
> > Not necessarily. I had this in mind when declaring 'struct nft_table'
> > instead of reusing 'struct table'. :)
> > 
> > The parser defines the grammar, the library would just follow it. So if
> > a given internal change complies with the old grammar, it should comply
> > with the library as well. Though this is quite theoretical, of course.
> > 
> > Let's take relational expressions as a simple example: In bison, we define
> > 'expr op rhs_expr'. An equivalent library function could be:
> > 
> > | struct nft_expr *nft_relational_new(struct nft_expr *,
> > | enum rel_ops,
> > | struct nft_expr *);
> 
> Then that means you would like to expose an API that allows you to
> build the abstract syntax tree.

That was the idea I had when I thought about how to transition from
fully text-based simple API to an extended one which allows working with
objects instead. We could start simple and refine further if
required/sensible. At the basic level, adding a new rule could be
something like:

| myrule = nft_rule_create("tcp dport 22 accept");

If required, one could implement rule building based on statements:

| stmt1 = nft_stmt_create("tcp dport 22");
| stmt2 = nft_stmt_create("accept");
| myrule = nft_rule_create();
| nft_rule_add_stmt(myrule, stmt1);
| nft_rule_add_stmt(myrule, stmt2);

[...]
> > Yes, that sounds good. I had something like this in mind:
> > 
> > | struct nft_stmt *nft_meta_match_immediate(enum nft_meta_type, void *data);
> > | int nft_rule_append_stmt(struct nft_rule *r, struct nft_stmt *stmt);
> > 
> > The obvious problem is that at the time that meta match is created,
> > there is no context information. So the second function would have to
> > do that.
> > 
> > I am not sure if this kind of context evaluation works in any case. E.g.
> > set elements are interpreted depending on the set they are added to. To
> > my surprise, that wasn't really an issue - the parser interprets it as a
> > constant symbol; when evaluating the expression as part of adding it to
> > the set it is resolved properly. This might not work in any case,
> > though.
> 
> Not sure I follow, what's the problem with the "missing context"?

The parser always has the full context available. E.g. if you pass it a
string such as this:

| add rule ip t c tcp dport 22 accept

By the time it parses the individual statements, it already knows e.g.
the table family and that might make a difference. As far as I can tell
from my hacking, it is possible to tweak the parser so that one could
use it to parse just a statement ('tcp dport 22').

> > > A list of use-cases for the third party application would be good to
> > > have to design this API.
> > 
> > OK, I'll take firewalld as an example and come up with a number of
> > use-cases which would help there.
> 
> Thanks, those use-cases would be very useful for designing this.

OK, we've collected a bit:

Ability to verify ruleset state
---
I want to find out whether a given element (table, chain, rule, set, set
element) previously added to the ruleset still exists.

Possibility to remove previously added ruleset components
---
After making an addition to the ruleset, it should be possible to remove
the added items again. For tables, chains, sets, set elements and
stateful objects, the data used when adding the item is sufficient for
later removal (as removal happens by value). For rules though, the
corresponding handle is required.
XXX: This is inconsistent regarding items removed and added back
again by another instance: As the rule's handle changes, it is not
found anymore afterwards.
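
(For reference, removal by handle looks like this on the nft command
line, which is why the handle has to be retained; table and chain names
are just examples:)

# nft -a list chain inet filter input   # -a prints rule handles
# nft delete rule inet filter input handle 42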

Create/delete user chains

Re: libnftables extended API proposal (Was: Re: [nft PATCH] libnftables: Fix for multiple context instances)

2017-12-10 Thread Pablo Neira Ayuso
On Thu, Dec 07, 2017 at 12:34:31PM +0100, Phil Sutter wrote:
> Hi Pablo,
> 
> On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> > On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
> [...]
> > > After tweaking the parser a bit, I can use it now to parse just a
> > > set_list_member_expr and use the struct expr it returns. This made it
> > > possible to create the desired struct cmd in above function without
> > > having to invoke the parser there.
> > > 
> > > Applying this refinement consistently should allow reaching arbitrary
> > > levels of granularity. For instance, one could stop at statement level,
> > > i.e. statements are created using a string representation. Or one could
> > > go down to expression level, and statements are created using one or two
> > > expressions (depending on whether it is relational or not). Of course
> > > this means the library will eventually become as complicated as the
> > > parser itself, not necessarily a good thing.
> > 
> > Yes, and we'll expose all internal representation details that we
> > will need to maintain forever if we don't want to break backward
> > compatibility.
> 
> Not necessarily. I had this in mind when declaring 'struct nft_table'
> instead of reusing 'struct table'. :)
> 
> The parser defines the grammar, the library would just follow it. So if
> a given internal change complies with the old grammar, it should comply
> with the library as well. Though this is quite theoretical, of course.
> 
> Let's take relational expressions as a simple example: In bison, we define
> 'expr op rhs_expr'. An equivalent library function could be:
> 
> | struct nft_expr *nft_relational_new(struct nft_expr *,
> |   enum rel_ops,
> |   struct nft_expr *);

Then that means you would like to expose an API that allows you to
build the abstract syntax tree.

> What is allowed in rhs_expr may change internally without breaking ABI
> or the parser-defined language.
> 
> Can you think of a problematic situation? My view is probably a bit
> rose-coloured. ;)
>
> > > On the other hand, having an abstract representation for set elements is
> > > quite convenient - their string representations might differ (take e.g.
> > > "22" vs. "ssh") so strcmp() is not sufficient to compare them.
> > > 
> > > I hope this allows you to get an idea of how I imagine the extended API,
> > > although certainly details are missing here. What do you think about it?
> > > Are you fine with the general concept so we can discuss details or do
> > > you see a fundamental problem with it?
> > 
> > OK, my understanding is that you would like to operate with some
> > native library object representation.
> > 
> > Most objects (table, chain...) are easy to represent, as you
> > mentioned. Rules are the most complex ones internally, but you can
> > probably abstract a simplified representation that suits well for your
> > usecases, e.g expose them in an iptables like representation -
> > something like adding matches and actions - Obviously, this needs to
> > allow to take sets as input, eg.
> > 
> > int meta_match_immediate(struct nft_rule *r, enum nft_meta_type, 
> > void *data);
> > int meta_match_set(struct nft_rule *r, enum nft_meta_type, struct 
> > nft_set *set);
> > 
> > meta_match_immediate() adds a meta + cmp to the rule, to compare for
> > an immediate value. meta_match_set() adds meta + lookup.
> 
> Yes, that sounds good. I had something like this in mind:
> 
> | struct nft_stmt *nft_meta_match_immediate(enum nft_meta_type, void *data);
> | int nft_rule_append_stmt(struct nft_rule *r, struct nft_stmt *stmt);
> 
> The obvious problem is that at the time that meta match is created,
> there is no context information. So the second function would have to
> do that.
> 
> I am not sure if this kind of context evaluation works in any case. E.g.
> set elements are interpreted depending on the set they are added to. To
> my surprise, that wasn't really an issue - the parser interprets it as a
> constant symbol; when evaluating the expression as part of adding it to
> the set it is resolved properly. This might not work in any case,
> though.

Not sure I follow, what's the problem with the "missing context"?

> > A list of use-cases for the third party application would be good to
> > have to design this API.
> 
> OK, I'll take firewalld as an example and come up with a number of
> use-cases which would help there.

Thanks, those use-cases would be very useful for designing this.


Re: libnftables extended API proposal (Was: Re: [nft PATCH] libnftables: Fix for multiple context instances)

2017-12-07 Thread Phil Sutter
Hi Pablo,

On Thu, Dec 07, 2017 at 01:05:45AM +0100, Pablo Neira Ayuso wrote:
> On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
[...]
> > After tweaking the parser a bit, I can use it now to parse just a
> > set_list_member_expr and use the struct expr it returns. This made it
> > possible to create the desired struct cmd in above function without
> > having to invoke the parser there.
> > 
> > Applying this refinement consistently should allow reaching arbitrary
> > levels of granularity. For instance, one could stop at statement level,
> > i.e. statements are created using a string representation. Or one could
> > go down to expression level, and statements are created using one or two
> > expressions (depending on whether it is relational or not). Of course
> > this means the library will eventually become as complicated as the
> > parser itself, not necessarily a good thing.
> 
> Yes, and we'll expose all internal representation details that we
> will need to maintain forever if we don't want to break backward
> compatibility.

Not necessarily. I had this in mind when declaring 'struct nft_table'
instead of reusing 'struct table'. :)

The parser defines the grammar, the library would just follow it. So if
a given internal change complies with the old grammar, it should comply
with the library as well. Though this is quite theoretical, of course.

Let's take relational expressions as a simple example: In bison, we define
'expr op rhs_expr'. An equivalent library function could be:

| struct nft_expr *nft_relational_new(struct nft_expr *,
|                                     enum rel_ops,
|                                     struct nft_expr *);

What is allowed in rhs_expr may change internally without breaking ABI
or the parser-defined language.

Can you think of a problematic situation? My view is probably a bit
rose-coloured. ;)

> > On the other hand, having an abstract representation for set elements is
> > quite convenient - their string representations might differ (take e.g.
> > "22" vs. "ssh") so strcmp() is not sufficient to compare them.
> > 
> > I hope this allows you to get an idea of how I imagine the extended API,
> > although certainly details are missing here. What do you think about it?
> > Are you fine with the general concept so we can discuss details or do
> > you see a fundamental problem with it?
> 
> OK, my understanding is that you would like to operate with some
> native library object representation.
> 
> Most objects (table, chain...) are easy to represent, as you
> mentioned. Rules are the most complex ones internally, but you can
> probably abstract a simplified representation that suits your use cases
> well, e.g. expose them in an iptables-like representation -
> something like adding matches and actions. Obviously, this needs to
> allow taking sets as input, e.g.
> 
> int meta_match_immediate(struct nft_rule *r, enum nft_meta_type, void *data);
> int meta_match_set(struct nft_rule *r, enum nft_meta_type, struct nft_set *set);
> 
> meta_match_immediate() adds a meta + cmp to the rule, to compare for
> an immediate value. meta_match_set() adds meta + lookup.

Yes, that sounds good. I had something like this in mind:

| struct nft_stmt *nft_meta_match_immediate(enum nft_meta_type, void *data);
| int nft_rule_append_stmt(struct nft_rule *r, struct nft_stmt *stmt);

The obvious problem is that at the time that meta match is created,
there is no context information. So the second function would have to
do that.

I am not sure if this kind of context evaluation works in any case. E.g.
set elements are interpreted depending on the set they are added to. To
my surprise, that wasn't really an issue - the parser interprets it as a
constant symbol; when evaluating the expression as part of adding it to
the set it is resolved properly. This might not work in any case,
though.

> A list of use-cases for the third party application would be good to
> have to design this API.

OK, I'll take firewalld as an example and come up with a number of
use-cases which would help there.

Thanks, Phil


Re: libnftables extended API proposal (Was: Re: [nft PATCH] libnftables: Fix for multiple context instances)

2017-12-06 Thread Pablo Neira Ayuso
Hi Phil,

On Tue, Dec 05, 2017 at 02:43:17PM +0100, Phil Sutter wrote:
[...]
> My "vision" for an extended API which actually provides an additional
> benefit is something that allows working with the entities the nft language
> defines in an abstract manner, ideally without having to invoke the
> parser all the time.
>
> Naturally, nftables entities are hierarchical: rules are contained in
> chains, chains contained in tables, etc. At the topmost level, there is
> something I call 'ruleset', which is basically just an instance of
> struct nft_cache. Since we have that in nft context already, it was
> optimized out (for now at least). As a leftover, I have a function which
> does a cache update (although this might be done implicitly as well).
> 
> For each entity contained in the ruleset, I wrote two functions, lookup
> and create, to reference them later. Due to the hierarchical layout,
> both functions take the higher-level entity as an argument. For
> instance:
> 
> | struct nft_table *nft_table_lookup(struct nft_ctx *nft,
> |  unsigned int family,
> |  const char *name);
> | struct nft_chain *nft_chain_new(struct nft_ctx *nft,
> |   struct nft_table *table,
> |   const char *name);
> 
> Family and name are enough to uniquely identify a table. By passing the
> returned object to the second function and a name, a new chain in that
> table can be created - or more precisely, a command (struct cmd
> instance) is created and stored in a new field of struct nft_ctx for
> later, when calling:
> 
> | int nft_ruleset_commit(struct nft_ctx *nft);
> 
> This constructs a new batch job using the previously created commands
> and calls netlink_batch_send().
> 
> The entities I've defined so far are:
> 
> struct nft_table;
> struct nft_chain;
> struct nft_rule;
> struct nft_set;
> struct nft_expr; /* actually this should be setelem */
> 
> The implementation is very incomplete and merely a playground at this
> point. I started with using the parser for everything, then tried to
> eliminate as much as possible. E.g. the first version to add an element
> to a set looked roughly like this (pseudo-code):
> 
> | int nft_set_add_element(struct nft_ctx *nft, struct nft_set *set,
> |                         const char *elem)
> | {
> |         char buf[1024];
> |
> |         sprintf(buf, "add element ip t %s %s", set->name, elem);
> |         scanner_push_buffer(scanner, &indesc_cmdline, buf);
> |         nft_parse(nft, scanner, &state);
> |         list_splice_tail(&state.cmds, &nft->cmds);
> | }
> 
> After tweaking the parser a bit, I can use it now to parse just a
> set_list_member_expr and use the struct expr it returns. This made it
> possible to create the desired struct cmd in above function without
> having to invoke the parser there.
> 
> Applying this refinement consistently should allow reaching arbitrary
> levels of granularity. For instance, one could stop at statement level,
> i.e. statements are created using a string representation. Or one could
> go down to expression level, and statements are created using one or two
> expressions (depending on whether it is relational or not). Of course
> this means the library will eventually become as complicated as the
> parser itself, not necessarily a good thing.

Yes, and we'll expose all internal representation details that we
will need to maintain forever if we don't want to break backward
compatibility.

> On the other hand, having an abstract representation for set elements is
> quite convenient - their string representations might differ (take e.g.
> "22" vs. "ssh") so strcmp() is not sufficient to compare them.
> 
> I hope this allows you to get an idea of how I imagine the extended API,
> although certainly details are missing here. What do you think about it?
> Are you fine with the general concept so we can discuss details or do
> you see a fundamental problem with it?

OK, my understanding is that you would like to operate with some
native library object representation.

Most objects (table, chain...) are easy to represent, as you
mentioned. Rules are the most complex ones internally, but you can
probably abstract a simplified representation that suits your use cases
well, e.g. expose them in an iptables-like representation -
something like adding matches and actions. Obviously, this needs to
allow taking sets as input, e.g.

int meta_match_immediate(struct nft_rule *r, enum nft_meta_type,
                         void *data);
int meta_match_set(struct nft_rule *r, enum nft_meta_type,
                   struct nft_set *set);

meta_match_immediate() adds a meta + cmp to the rule, to compare
against an immediate value. meta_match_set() adds meta + lookup.

A list of use cases for third-party applications would be good to have
when designing this API.

libnftables extended API proposal (Was: Re: [nft PATCH] libnftables: Fix for multiple context instances)

2017-12-05 Thread Phil Sutter
Hi Pablo,

Since I was about to start explaining my extended API idea as part of my
reply, let's take this on-list and I'll give a full overview.

On Mon, Dec 04, 2017 at 07:46:04PM +0100, Pablo Neira Ayuso wrote:
[...]
> Kernel code to check if an element exists is already upstream, it's
> in current -rc2. And I have a patch here that I'm finishing to add a
> command to do something like:
> 
> # nft get element x y { 1.1.1.1 }
> 
> to see if element 1.1.1.1 is there. You can check for several elements
> in one go. For intervals, you can check both whether an element falls
> within an interval and whether the interval itself exists.

Looks good!

> Is that what you need? It would be a matter of offering an API for
> this.

It was just an example. The problem with the simple API is that for
any task, a command string has to be created and the output parsed
again. Actually, there are good reasons to just call 'nft' instead:

- Debugging is a bit easier since behaviour is fully defined by the
  command string. With the simple API, one has to look at the settings
  of the nft context object as well to determine behaviour.
- For reading output, popen() is a bit simpler to use than fmemopen().
  Also, the simple API might lose messages due to bugs in the code (a
  missing nft_print() conversion) - whatever the nft binary prints is
  caught by popen().
- The success/fail state of a command is clearly defined by the nft
  exit code, with no need to check for the presence of erec messages or
  similar "hacks" to find out (see the sketch below).
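
As a minimal sketch of that approach (the command string and error
handling are illustrative only):

| #include <stdio.h>
| #include <stdlib.h>
| 
| int main(void)
| {
|         FILE *fp = popen("nft list ruleset", "r");
|         char line[4096];
| 
|         if (!fp) {
|                 perror("popen");
|                 return EXIT_FAILURE;
|         }
|         /* whatever the nft binary prints is caught here */
|         while (fgets(line, sizeof(line), fp))
|                 fputs(line, stdout);
|         /* pclose() returns the child's wait status, so the nft exit
|          * code defines success or failure unambiguously */
|         return pclose(fp) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
| }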

> > I have a draft for this extended API and have been working on a PoC to
> > shed light on the biggest shortcomings. Expect me to start upstream
> > discussion soon. :)
> 
> Great.

My "vision" for an extended API which actually provides an additional
benefit is something that allows to work with the entities nft language
defines in an abstract manner, ideally without having to invoke the
parser all the time.

Naturally, nftables entities are hierarchical: rules are contained in
chains, chains contained in tables, etc. At the topmost level, there is
something I call 'ruleset', which is basically just an instance of
struct nft_cache. Since we have that in nft context already, it was
optimized out (for now at least). As a leftover, I have a function which
does a cache update (although this might be done implicitly as well).

For each entity contained in the ruleset, I wrote two functions, lookup
and create, to reference them later. Due to the hierarchical layout,
both functions take the higher-level entity as an argument. For
instance:

| struct nft_table *nft_table_lookup(struct nft_ctx *nft,
|                                    unsigned int family,
|                                    const char *name);
| struct nft_chain *nft_chain_new(struct nft_ctx *nft,
|                                 struct nft_table *table,
|                                 const char *name);

Family and name are enough to uniquely identify a table. By passing the
returned object to the second function and a name, a new chain in that
table can be created - or more precisely, a command (struct cmd
instance) is created and stored in a new field of struct nft_ctx for
later, when calling:

| int nft_ruleset_commit(struct nft_ctx *nft);

This constructs a new batch job using the previously created commands
and calls netlink_batch_send().
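
As a usage sketch under the proposed API (nft_ctx_new()/nft_ctx_free()
are the existing libnftables entry points; the lookup, chain and commit
functions are proposal-only at this point):

| #include <linux/netfilter.h>            /* NFPROTO_IPV4 */
| #include <nftables/libnftables.h>
| 
| static int add_input_chain(void)
| {
|         struct nft_ctx *nft = nft_ctx_new(NFT_CTX_DEFAULT);
|         struct nft_table *t;
|         int ret = -1;
| 
|         /* family and name uniquely identify the table */
|         t = nft_table_lookup(nft, NFPROTO_IPV4, "filter");
|         if (t && nft_chain_new(nft, t, "input"))
|                 /* builds the batch and calls netlink_batch_send() */
|                 ret = nft_ruleset_commit(nft);
|         nft_ctx_free(nft);
|         return ret;
| }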

The entities I've defined so far are:

struct nft_table;
struct nft_chain;
struct nft_rule;
struct nft_set;
struct nft_expr; /* actually this should be setelem */

The implementation is very incomplete and merely a playground at this
point. I started with using the parser for everything, then tried to
eliminate as much as possible. E.g. the first version to add an element
to a set looked roughly like this (pseudo-code):

| int nft_set_add_element(struct nft_ctx *nft, struct nft_set *set,
|                         const char *elem)
| {
|         char buf[1024];
| 
|         snprintf(buf, sizeof(buf), "add element ip t %s %s",
|                  set->name, elem);
|         scanner_push_buffer(scanner, &indesc_cmdline, buf);
|         nft_parse(nft, scanner, &state);
|         list_splice_tail(&state.cmds, &nft->cmds);
|         return 0;
| }

After tweaking the parser a bit, I can use it now to parse just a
set_list_member_expr and use the struct expr it returns. This made it
possible to create the desired struct cmd in the above function without
having to invoke the parser there.
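
Roughly, the refined version could then look like this -
parse_setelem() and cmd_add_setelem() are made-up names standing in for
the tweaked parser entry point and the command construction; only the
shape of the function is the point here:

| int nft_set_add_element(struct nft_ctx *nft, struct nft_set *set,
|                         const char *elem)
| {
|         /* parse just the element, not a whole command string */
|         struct expr *e = parse_setelem(nft, elem);
| 
|         if (!e)
|                 return -1;
|         /* queue the same struct cmd the full parser would create */
|         return cmd_add_setelem(nft, set, e);
| }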

Applying this refinement consistently should allow reaching arbitrary
levels of granularity. For instance, one could stop at statement level,
i.e. statements are created using a string representation. Or one could
go down to expression level, and statements are created using one or two
expressions (depending on whether it is relational or not). Of course
this means the library will eventually become as complicated as the
parser itself, not necessarily a good thing.

On the other hand, having an abstract representation for set elements is
quite convenient - their string representations might differ (take e.g.
"22" vs. "ssh") so strcmp() is

Re: [PATCH nft 0/1] Proposal: include directories for rulesets

2016-03-04 Thread Puustinen, Ismo
On Fri, 2016-03-04 at 10:57 +0100, Arturo Borrero Gonzalez wrote:
> Hi Ismo,
> 
> I like the idea. What I'm wondering is whether it's worth having
> another directive like 'includedir' to be more explicit.

Sure, I'm fine with that approach too. If the project leadership
indicates that the include directory approach makes sense, I could do a
patch using the 'includedir' syntax too.

Ismo

Re: [PATCH nft 0/1] Proposal: include directories for rulesets

2016-03-04 Thread Arturo Borrero Gonzalez
On 2 March 2016 at 13:11, Ismo Puustinen  wrote:
> A nice-to-have feature in nft would be the ability to use include
> directories that contain rule files. The use case would be support for
> services dropping their custom configuration files into a directory,
> allowing a more modular firewall configuration.
>
> This is a proof-of-concept patch -- I'm not very familiar with nftables
> code base and conventions.
>

Hi Ismo,

I like the idea. What I'm wondering is whether it's worth having
another directive like 'includedir' to be more explicit.

-- 
Arturo Borrero González


[PATCH nft 0/1] Proposal: include directories for rulesets

2016-03-02 Thread Ismo Puustinen
A nice-to-have feature in nft would be the ability to use include
directories that contain rule files. The use case would be support for
services dropping their custom configuration files into a directory,
allowing a more modular firewall configuration.
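
As a sketch of the intended usage (path and final syntax illustrative
only; the 'includedir' spelling is the alternative suggested in the
replies above):

  # pull in per-service drop-in rule files from a directory
  includedir "/etc/nftables.d"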

This is a proof-of-concept patch -- I'm not very familiar with nftables
code base and conventions.

Ismo Puustinen (1):
  scanner: add support for include directories

 src/main.c    |  4 ++--
 src/scanner.l | 73 +++
 2 files changed, 61 insertions(+), 16 deletions(-)

-- 
2.5.0
