I guess the big question is: is SQLite3 really appropriate for
"lightweight" but general-purpose use?

Is it fast enough for real-world use? Are there bugs or limitations that
prevent it from really working well?

My particular issues with it, given this configuration:

DROP TABLE host_srcif;
CREATE TABLE host_srcif (
        peer_ip_src CHAR(15) NOT NULL DEFAULT '0.0.0.0',
        iface_in INT UNSIGNED NOT NULL DEFAULT 0,
        ip_dst CHAR(15) NOT NULL DEFAULT '0.0.0.0',
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (peer_ip_src, iface_in, ip_dst, stamp_inserted)
);

! nfacctd configuration file
plugins: sqlite3
!
aggregate: peer_src_ip,in_iface,dst_host
sql_optimize_clauses: true
sql_db: /var/opt/pmacct/g2.db
sql_table: host_srcif
!
sql_refresh_time: 120
sql_history: 2m
sql_history_roundoff: m
sql_dont_try_update: true
nfacctd_time_new: true
!

This configuration produced "columns are not unique" errors at a rate of
31 per second:

ERROR ( default/sqlite3 ): columns peer_ip_src, iface_in, ip_dst,
stamp_inserted are not unique#012

combined with "database is locked" errors at around 16 per second:

ERROR ( default/sqlite3 ): database is locked#012

so I commented out "sql_dont_try_update: true", and the "columns are not
unique" errors went away, but the "database is locked" errors continue.

But despite those errors, there are lots of records in the database:

sqlite> select stamp_inserted, count(*) from host_srcif group by
stamp_inserted;
2017-02-09 13:42:00|143277
2017-02-09 13:50:00|135071
2017-02-09 15:50:00|122825
2017-02-09 15:52:00|118091

The 13:42:00 and 13:50:00 periods were when nfacctd produced the "columns
are not unique" error, and the 15:50:00 and 15:52:00 periods were when
nfacctd produced the "database is locked" error. So despite the errors,
records are certainly making it in.

That said, the I/O load on the server is massively higher in the second
scenario.

So what do I do? Data is clearly going in despite the errors, but some of
it must be getting lost. How do I get all the data in without losing any?
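If it helps frame the question: the way duplicate keys can accumulate instead of erroring out is the classic update-else-insert pattern, which (as I understand it) is roughly what happens once sql_dont_try_update is unset. A minimal sketch with an invented helper, not pmacct's literal SQL:

```python
import sqlite3

# Update-else-insert: try to add the counters into an existing row; if no row
# matched this key, insert a fresh one. Trimmed, hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE host_srcif (
        peer_ip_src TEXT, iface_in INT, ip_dst TEXT, stamp_inserted TEXT,
        packets INT, bytes INT,
        PRIMARY KEY (peer_ip_src, iface_in, ip_dst, stamp_inserted))""")

def accumulate(conn, peer, iface, dst, stamp, packets, nbytes):
    cur = conn.execute(
        """UPDATE host_srcif SET packets = packets + ?, bytes = bytes + ?
           WHERE peer_ip_src=? AND iface_in=? AND ip_dst=? AND stamp_inserted=?""",
        (packets, nbytes, peer, iface, dst, stamp))
    if cur.rowcount == 0:            # no existing row for this key: first sample
        conn.execute("INSERT INTO host_srcif VALUES (?, ?, ?, ?, ?, ?)",
                     (peer, iface, dst, stamp, packets, nbytes))

accumulate(conn, "10.0.0.1", 3, "10.0.0.2", "2017-02-09 13:42:00", 10, 1200)
accumulate(conn, "10.0.0.1", 3, "10.0.0.2", "2017-02-09 13:42:00", 5, 600)
print(conn.execute("SELECT packets, bytes FROM host_srcif").fetchall())
# -> [(15, 1800)]
```

The extra UPDATEs are presumably also why the I/O load jumps so much in the second scenario.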

And finally, my last question. I have little experience with SQLite; I
understand it's a file-based database. I noticed during the second period
(15:50:00+, above) that when I opened the SQLite CLI to run queries (just
SELECTs, read-only), I often couldn't, because it said the database was
locked. Is it true that an SQLite database cannot be read while a write is
in progress? That would seem to limit the usefulness of the system. Is
there a config parameter in pmacct/nfacctd that might make this "better"
somehow?

Thanks,

Ed
_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists