On Oct 5, 10:16 am, François Beausoleil <[email protected]> wrote:
> Hi!
>
> I have a Sinatra application using Sequel on PostgreSQL. STDOUT is filled
> with statements like these:
>
> I, [2011-10-05T17:09:59.545735 #7848]  INFO -- : (0.001433s) SELECT
> "pg_attribute"."attname" AS "name", format_type("pg_type"."oid",
> "pg_attribute"."atttypmod") AS "db_type", pg_get_expr("pg_attrdef"."adbin",
> "pg_class"."oid") AS "default", NOT "pg_attribute"."attnotnull" AS
> "allow_null", COALESCE(("pg_attribute"."attnum" = ANY("pg_index"."indkey")),
> false) AS "primary_key" FROM "pg_class" INNER JOIN "pg_attribute" ON
> ("pg_attribute"."attrelid" = "pg_class"."oid") INNER JOIN "pg_type" ON
> ("pg_type"."oid" = "pg_attribute"."atttypid") INNER JOIN "pg_namespace" ON
> ("pg_namespace"."oid" = "pg_class"."relnamespace") LEFT OUTER JOIN
> "pg_attrdef" ON (("pg_attrdef"."adrelid" = "pg_class"."oid") AND
> ("pg_attrdef"."adnum" = "pg_attribute"."attnum")) LEFT OUTER JOIN "pg_index"
> ON (("pg_index"."indrelid" = "pg_class"."oid") AND ("pg_index"."indisprimary"
> IS TRUE)) WHERE (("pg_attribute"."attisdropped" IS FALSE) AND
> ("pg_attribute"."attnum" > 0) AND ("pg_class"."relname" = 'show_bindings')
> AND ("pg_namespace"."nspname" !~* 'pg_*|information_schema')) ORDER BY
> "pg_attribute"."attnum"
> I, [2011-10-05T17:09:59.550914 #7848]  INFO -- : (0.002809s) SELECT
> bucketize('2011-10-05 13:09:59.505103+0000', '2011-10-05
> 17:09:59.505103+0000', 24, date_trunc('minute', "created_at")) AS "bucket",
> COUNT(*) AS "interactions_count" FROM "twitter_interactions" INNER JOIN
> "show_bindings" USING ("interaction_id") WHERE (("show_id" =
> '71681dd2-daec-11e0-9b8e-40402761cfca') AND (created_at >= '2011-10-05
> 13:09:59.505103+0000' AND created_at < '2011-10-05 17:09:59.505103+0000'))
> GROUP BY 1 ORDER BY 1
>
> It's very hard to pick my statements apart from the adapter's. The first
> statement above is one the DB adapter issued to identify columns, tables,
> and so on. The second is the one I'm really interested in; the rest is
> noise as far as I'm concerned. I can't just raise the filtering level,
> because my statements would be skipped as well. What I need is for the
> adapter's SQL to be logged at debug, my statements at info, and a warning
> when the duration exceeds a specified limit.
>
> I took a very quick look (5 minutes tops!) and found where the logging
> actually happens. What I'd need to do now is hunt down all the adapter SQL
> statements and make them call log_debug rather than log_info.
>
> Jeremy, would you be OK with a proper patch that did something like this?
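The split described above, adapter SQL at debug and application SQL at info, comes down to Ruby's standard `Logger` severity filtering: a logger whose level is set to `INFO` drops `DEBUG` messages and keeps everything at `INFO` and above. A minimal stdlib-only sketch; the `log_debug`/`log_info` names mirror the hypothetical patch, not Sequel's actual API:

```ruby
require 'logger'
require 'stringio'

out = StringIO.new
logger = Logger.new(out)
logger.level = Logger::INFO  # anything below INFO (i.e. DEBUG) is dropped

# Stand-ins for the proposed log_debug / log_info split.
log_debug = ->(sql) { logger.debug("(adapter) #{sql}") }
log_info  = ->(sql) { logger.info("(app) #{sql}") }

log_debug.call('SELECT "pg_attribute"."attname" ...')  # filtered out
log_info.call('SELECT bucketize(...) ...')             # written

puts out.string.lines.count  # prints 1: only the INFO line survived
```

With the level set back to `Logger::DEBUG` during development, the adapter's introspection queries would reappear, which is exactly the opt-in behavior the patch is after.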
If you don't want to log the schema parsing information, the easier
solution is not to add a logger to the Database until after all your model
classes have been loaded. Logging in Sequel happens at a very low level
(directly before SQL execution), and you generally don't have the
necessary context to decide whether or not you want to log something.

Jeremy

--
You received this message because you are subscribed to the Google Groups
"sequel-talk" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at
http://groups.google.com/group/sequel-talk?hl=en.
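The suggested workaround amounts to reordering the application's boot sequence so the logger is attached only after the schema-parsing queries have already run. A sketch of what that boot file might look like; the connection URL and model file paths are placeholders, not taken from the thread:

```ruby
require 'sequel'
require 'logger'

# Connect WITHOUT a logger: the column-introspection SELECTs that Sequel
# issues while model classes load will not be logged at all.
DB = Sequel.connect('postgres://localhost/myapp_db')

# Requiring each model triggers the pg_attribute/pg_class queries
# seen in the output above, silently.
require './models/twitter_interaction'
require './models/show_binding'

# Only now attach the logger; from this point on, the application's own
# queries (and nothing from schema parsing) reach STDOUT.
DB.loggers << Logger.new($stdout)
```

`Database#loggers` is an array, so appending after the models are loaded is all the ordering that is needed; no patch to the adapter's logging calls is required for this particular case.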
