Certainly Postgres is capable of handling this volume just fine. Throw in
some partition rotation handling and you have a solution.
If you want to play with something different, check out Graylog, which is
backed by Elasticsearch. A bit more work to set up than a single Postgres
table, but it has b…
On 8/22/2016 2:39 AM, Thomas Güttler wrote:
On 19.08.2016 at 19:59, Andy Colson wrote:
On 8/19/2016 2:32 AM, Thomas Güttler wrote:
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
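
For illustration only, here is a minimal sketch (not from the thread; all names and
types are assumptions) of what such a table could look like with the
partition-rotation idea from the reply above. Declarative partitioning as written
needs PostgreSQL 10+, and a primary key on a partitioned parent needs 11+; on the
9.x releases discussed in this thread the same layout would be built with
inheritance:

    -- Illustrative only: a log table partitioned by range on the timestamp,
    -- so old data can be dropped partition-by-partition instead of DELETEd row by row.
    CREATE TABLE logs (
        id              bigserial,
        ts              timestamptz NOT NULL,
        host            text        NOT NULL,
        service_on_host text        NOT NULL,
        loglevel        text        NOT NULL,
        msg             text,
        extra           jsonb,                -- the optional json column
        PRIMARY KEY (id, ts)                  -- the partition key must be part of the PK
    ) PARTITION BY RANGE (ts);

    -- One partition per week, for example:
    CREATE TABLE logs_2016_w34 PARTITION OF logs
        FOR VALUES FROM ('2016-08-22') TO ('2016-08-29');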
On Mon, 22 Aug 2016, 3:40 p.m., Thomas Güttler wrote:
>
>
> > On 19.08.2016 at 19:59, Andy Colson wrote:
> > On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> >> I want to store logs in a simple table.
> >>
> >> Here are my columns:
> >>
> >> Primary-key (auto generated)
> >> timestamp
> >> host
> >
On 19.08.2016 at 19:59, Andy Colson wrote:
On 8/19/2016 2:32 AM, Thomas Güttler wrote:
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
I am unsure which DB to choose: Postgres, ElasticSearch or ...?
Thank you Chris for looking at my issue in such detail.
Yes, the parallel feature rocks.
Regards,
Thomas Güttler
On 19.08.2016 at 22:40, Chris Mair wrote:
On 19/08/16 10:57, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for adhoc querying.
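
(The "parallel feature" mentioned above presumably refers to parallel query, new in
PostgreSQL 9.6 at the time. A rough sketch of exercising it against the illustrative
logs table sketched earlier; the setting value and names are assumptions, not from
the thread:)

    -- Allow up to 4 workers per Gather node, then check whether the planner
    -- actually chose a Parallel Seq Scan for a big aggregate over the logs.
    SET max_parallel_workers_per_gather = 4;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM logs
    WHERE ts >= now() - interval '1 day'
      AND loglevel = 'ERROR';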
On Sat, 20 Aug 2016, 2:00 a.m., Andy Colson wrote:
> On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> > I want to store logs in a simple table.
> >
> > Here are my columns:
> >
> > Primary-key (auto generated)
> > timestamp
> > host
> > service-on-host
> > loglevel
> > msg
> > json (optional)
On 19/08/16 10:57, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for adhoc
querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're
talking big tables.
in fact that's several rows/second on a 24/7 basis.
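
(Worked out: 200,000 rows over 86,400 seconds is roughly 2.3 rows per second
sustained, and with the 6-week retention mentioned later about 42 × 200,000 ≈ 8.4
million rows are on hand at any time.)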
On 8/19/2016 2:32 AM, Thomas Güttler wrote:
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
I am unsure which DB to choose: Postgres, ElasticSearch or ...?
We don't have high traffic. About 200k rows per day.
On Fri, Aug 19, 2016 at 2:32 AM, Thomas Güttler wrote:
> I want to store logs in a simple table.
>
> Here are my columns:
>
> Primary-key (auto generated)
> timestamp
> host
> service-on-host
> loglevel
> msg
> json (optional)
>
> I am unsure which DB to choose: Postgres, ElasticSearch or ...?
On 8/19/2016 3:44 AM, Andreas Kretschmer wrote:
So, in your case, consider partitioning, maybe per month. That way you can
also avoid the mess of table and index bloat.
with his 6-week retention, I'd partition by week.
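
A sketch of what that weekly rotation could look like, again assuming the
illustrative partitioned logs table from earlier (partition names and dates are made
up):

    -- Run once a week (e.g. from cron): add next week's partition,
    -- then drop whatever falls outside the 6-week retention window.
    CREATE TABLE IF NOT EXISTS logs_2016_w40 PARTITION OF logs
        FOR VALUES FROM ('2016-10-03') TO ('2016-10-10');

    -- Dropping a whole partition is near-instant and leaves no dead rows behind,
    -- unlike DELETE followed by VACUUM on one big table.
    DROP TABLE IF EXISTS logs_2016_w34;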
--
john r pierce, recycling bits in santa cruz
On 19.08.2016 at 12:44, Andreas Kretschmer wrote:
Thomas Güttler wrote:
How will you be using the logs? What kind of queries? What kind of searches?
Correlating events and logs from various sources could be really easy with
joins, count and summary operations.
Wishes raise with possibilities.
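
As an illustration of the kind of count/summary queries meant here, something like
the following works directly in SQL (table and column names as in the earlier
sketch, purely hypothetical):

    -- Error volume per host and service over the last 24 hours.
    SELECT host,
           service_on_host,
           count(*) AS errors
    FROM logs
    WHERE ts >= now() - interval '24 hours'
      AND loglevel = 'ERROR'
    GROUP BY host, service_on_host
    ORDER BY errors DESC;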
On Fri, Aug 19, 2016 at 12:44 PM, Andreas Kretschmer wrote:
> for append-only tables like this consider 9.5 and BRIN-Indexes for
> timestamp searches. But if you delete after N weeks, BRIN shouldn't work
> properly because of vacuum and re-use of space within the table.
> Do you know BRIN?
>
> So, in your case, consider partitioning, maybe per month.
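
For reference, a BRIN index on the timestamp column of the hypothetical logs table
sketched earlier would look like this. The caveat quoted above is that DELETE plus
space re-use after VACUUM destroys the physical ordering BRIN relies on, which is
one more argument for dropping whole partitions instead of deleting rows:

    -- BRIN is tiny and works well when ts values are physically ordered,
    -- as they naturally are for append-only log inserts.
    -- (On a partitioned parent this cascades to partitions from PostgreSQL 11 on.)
    CREATE INDEX logs_ts_brin ON logs USING brin (ts);

    -- A typical range query that can use it:
    SELECT *
    FROM logs
    WHERE ts BETWEEN '2016-08-19 00:00' AND '2016-08-19 06:00';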
Thomas Güttler wrote:
>> How will you be using the logs? What kind of queries? What kind of searches?
>> Correlating events and logs from various sources could be really easy with
>> joins, count and summary operations.
>
> Wishes raise with possibilities. First I want to do simple queries about …
On 19.08.2016 at 10:57, Thomas Güttler wrote:
>
>
> On 19.08.2016 at 09:42, John R Pierce wrote:
[...]
>> in fact that's several rows/second on a 24/7 basis
>
> There is no need to store them more than 6 weeks in my current use case.
>
> I think indexing in postgres is much faster …
On 19.08.2016 at 11:21, Sameer Kumar wrote:
On Fri, Aug 19, 2016 at 4:58 PM Thomas Güttler <guettl...@thomas-guettler.de> wrote:
On 19.08.2016 at 09:42, John R Pierce wrote:
> On 8/19/2016 12:32 AM, Thomas Güttler wrote:
>> What do you think?
>
> I store most of my logs in flat textfiles syslog style, and use grep for adhoc querying.
On Fri, Aug 19, 2016 at 4:58 PM, Thomas Güttler wrote:
>
>
> On 19.08.2016 at 09:42, John R Pierce wrote:
> > On 8/19/2016 12:32 AM, Thomas Güttler wrote:
> >> What do you think?
> >
> > I store most of my logs in flat textfiles syslog style, and use grep for
> > adhoc querying.
> >
> > 200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're talking big tables.
On 19.08.2016 at 09:42, John R Pierce wrote:
On 8/19/2016 12:32 AM, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for adhoc
querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're
talking big tables.
On 8/19/2016 12:32 AM, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat textfiles syslog style, and use grep for
adhoc querying.
200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon
you're talking big tables.
in fact that's several rows/second on a 24/7 basis.
I want to store logs in a simple table.
Here are my columns:
Primary-key (auto generated)
timestamp
host
service-on-host
loglevel
msg
json (optional)
I am unsure which DB to choose: Postgres, ElasticSearch or ...?
We don't have high traffic. About 200k rows per day.
My heart beats f…