Hi Rich
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Travers
> Sent: Wednesday, 7 December 2016 17:12
> To: Postgres General
> Subject: Re: [GENERAL] When to use COMMENT vs
Looping pgsql-general mail list
I see I was not clear in my question; below is the order of events we see
when we get an "invalid page header in block" corruption error:
- Windows server crashed/restarted due to power failure (we believe) (I
see that write cache / write-back cache / disk cache are
Melvin: haha, yeah, it's a download from the Clark County, NV voterfile
website. It's just the format they send out to people who request the file.
I worked this summer doing QA on voterfile builds so I'm familiar with the
data. I thought it would be good stuff to start with.
But thank you for
On Wed, Dec 7, 2016 at 8:25 PM, Adrian Klaver
wrote:
> On 12/07/2016 05:19 PM, metaresolve wrote:
>
>> Uh, yeah, it was a SELECT * from cc_20161207;
>>
>> I know, it was dumb. I didn't realize it would break it or at least run
>> for
>> a while. I tend to do things in
On 12/07/2016 05:19 PM, metaresolve wrote:
Uh, yeah, it was a SELECT * from cc_20161207;
I know, it was dumb. I didn't realize it would break it or at least run for
a while. I tend to do things in small steps, run a query, check my results,
then tweak.
You're right, I wouldn't want to be viewing those million. so I guess I
could just be
On 12/7/2016 4:02 PM, metaresolve wrote:
I used to use Access to do my data crunching, matching, and cleaning at my
old job. I worked with a max of 600k records so Access could handle it. I
know, lame, but it's what I knew.
Access is really 2 completely different things bundled. One is that
On 12/07/2016 04:54 PM, metaresolve wrote:
Choking: I get the "Waiting for the query execution to complete" circling
around for a while. I tried shutting it down and trying again but it's still
freezing on the execution. But if the TB are accurate, I wonder why it's
slowing on this? Any thoughts?
Adrian Klaver writes:
> On 12/07/2016 04:02 PM, metaresolve wrote:
>> How many records and relational tables can pgadmin/postgres actually handle?
> https://www.postgresql.org/about/
> So you have plenty of head room.
Well, pgadmin and postgres are two different
That's a little beyond me. Let me back up a sec and maybe you guys can
help.
I used to use Access to do my data crunching, matching, and cleaning at my
old job. I worked with a max of 600k records so Access could handle it. I
know, lame, but it's what I knew.
I was using Alteryx the past 8
On 12/7/2016 3:28 PM, David G. Johnston wrote:
On the second image you are using double quotes to delimit a string
literal. This is wrong: PostgreSQL always uses single quotes to
indicate a literal string value; double quotes are reserved for object
identifiers (table names, column names, etc.).
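A quick sketch of the rule above (the table and column names here are made
up for illustration):

```sql
-- Single quotes delimit string literals:
SELECT 'hello' AS greeting;

-- Double quotes delimit identifiers, so "hello" names a column, not a
-- string; this fails unless a column called hello actually exists:
-- SELECT "hello" FROM some_table;

-- Mixed use against a hypothetical table:
SELECT "first_name" FROM voters WHERE "last_name" = 'Smith';
```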
Thank you! It was the double quotes. I did run into the permissions error
afterwards but I solved it with a google search.
Thanks,
meta
--
View this message in context:
http://postgresql.nabble.com/Problems-Importing-table-to-pgadmin-tp5933807p5933812.html
Sent from the PostgreSQL - general
On Wed, Dec 7, 2016 at 4:13 PM, metaresolve wrote:
> However, when I look at the table it's got the OID fields in there. From
> what I read, the default is set to off, so I don't understand why they're
> creating them.
>
>
Hi,
pgAdmin 4
Windows 10
I'm brand new and struggling. I was able to create a table with the CREATE
TABLE command and set up the columns. However, when I try to "import"
nothing happens, at all. I import the table and hit Ok and nothing happens.
If I SELECT * from [table] I get no rows back. I'm
On 12/7/2016 2:23 PM, Rob Sargent wrote:
How does your reply change, if at all, if:
- Fields not indexed
- 5000 hot records per 100K records (millions of records total)
- A dozen machines writing 1 update per 10 seconds (one machine
writing every 2 mins)
- - each to a different "5000"
or
On Wed, Dec 7, 2016 at 10:40 PM, Maeldron T. wrote:
> Anyway, ICU is turned on for PostgreSQL 9.6 even in the pkg version. Hurray.
Hmm, a curious choice, considering that FreeBSD finally has built-in
collations that work!
Using the port's ICU patch doesn't change anything
While loading 9.4 in my system I got a warning that oom_adj is deprecated,
which seems to have come from the postgres logger task. I found emails in the
archives that this error was fixed in 9.3.3.1 but did the same thing get missed
in the logger code? Is it fixed later? I'm working on an
Joseph Brenner writes:
> I thought I'd reproduced the behavior in an xterm, but I was just
> trying again and I don't see it. It does seem that the dumbness of my
> dumb terminal is a factor.
Evidently.
> If I understand the way this works, it could be an even more baffling
Yes, I have a tendency to use emacs sub-shells (and occasionally M-x
sql-postgres)--
I thought I'd reproduced the behavior in an xterm, but I was just
trying again and I don't see it. It does seem that the dumbness of my
dumb terminal is a factor.
If I understand the way this works, it could be
On 12/7/2016 8:47 AM, Rob Sargent wrote:
Please tell me that in this case, updating 2 (big)integer columns does
not generate dead tuples (i.e. does not involve a insert/delete pair).
if the fields being updated aren't indexed, and there's free tuple space
that has already been vacuumed in the
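The HOT-update behavior described above can be encouraged explicitly; a
hedged sketch (the table name and fillfactor value are illustrative, not
from the thread):

```sql
-- Reserving free space in each page lets an updated row version stay on
-- the same page (a HOT update), so indexes need not be touched when only
-- non-indexed columns change:
CREATE TABLE counters (
    id BIGINT PRIMARY KEY,
    a  BIGINT,
    b  BIGINT
) WITH (fillfactor = 70);

-- Only the non-indexed columns a and b change, so this update is a
-- candidate for HOT:
UPDATE counters SET a = a + 1, b = b + 2 WHERE id = 42;
```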
2. Accumulation of dead tuples leading to what should be very short
operations taking longer.
No idea if that is helpful but where I would probably start
Please tell me that in this case, updating 2 (big)integer columns does
not generate dead tuples (i.e. does not involve a insert/delete
On Dec 7, 2016 5:07 PM, "Karsten Hilbert" wrote:
>
> On Wed, Dec 07, 2016 at 07:57:54AM -0800, Rich Shepard wrote:
>
> > I have used '-- ' to enter comments about tables or columns and am
curious
> > about the value of storing comments in tables using the COMMENT key
We are trying to export our data from DB2 to Postgres; while exporting CLOB
data it is ignoring the newline character. Can you please suggest how we can
export and import the CLOB data in its exact format?
On Wed, Dec 07, 2016 at 07:57:54AM -0800, Rich Shepard wrote:
> I have used '-- ' to enter comments about tables or columns and am curious
> about the value of storing comments in tables using the COMMENT key word.
> When is the latter more appropriate than the former?
"--" only means
I have used '-- ' to enter comments about tables or columns and am curious
about the value of storing comments in tables using the COMMENT key word.
When is the latter more appropriate than the former?
TIA,
Rich
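The difference between the two, in brief: a '-- ' comment lives only in the
SQL script, while COMMENT stores the note in the system catalogs, where psql
and pgAdmin can display it. A sketch (table and column names are
illustrative):

```sql
-- A script comment like this one survives only in the .sql file itself.
CREATE TABLE voters (
    id       BIGINT PRIMARY KEY,
    precinct TEXT
);

-- COMMENT stores the text in the catalogs, so it travels with the
-- database and shows up in psql's \d+ output:
COMMENT ON TABLE voters IS 'Registered-voter extract, refreshed nightly';
COMMENT ON COLUMN voters.precinct IS 'County precinct code';

-- Retrieve the table comment later:
SELECT obj_description('voters'::regclass, 'pg_class');
```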
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make
On Wed, Dec 7, 2016 at 8:33 AM, Michael Sheaver wrote:
> with this for a couple days about a year ago, the workaround I found that
> works is to first import it into a MySQL table, strip out the characters in
> MySQL, dump the data out to a CSV and finally bring the sanitized
I would like to echo the sentiment on collation and expand it to character sets
in general. When issues with them come up, they do take an incredible amount of
time and effort to resolve, and are one of my own biggest pain points when
dealing with databases and datasets from other sources. Case
Hi guys,
I'd like to add a new standby node without cloning the data from the master;
how is that possible?
Explain: register a new slave node on the cluster and avoid copying 2TB of
data over the network, since the node arrives with the cloned data already on
disk. After this I think just increment the new data with
Think I found it. classid 1262 is pg_database and I seem to remember that
NOTIFY takes that lock. I dropped pg_notify from my function and got
immediately >3500 tx/sec.
Tom Lane wrote:
> BTW, I realized while testing this that there's still one gap in our
> understanding of what went wrong for you: cases like "SELECT 'hello'"
> should not have tried to use the pager, because that would've produced
> less than a screenful of data
At some point emacs was
Hi,
I need to tune my database for a high update rate of a single small table.
A little simplified it looks like this:
CREATE TABLE temp_agg(
topic TEXT PRIMARY KEY,
tstmp TIMESTAMP,
cnt BIGINT,
sum NUMERIC
)
The table has 500 rows.
A transaction looks simplified like this:
1) select
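The transaction description is cut off above; one plausible shape for a
per-topic select-then-update cycle on this table is sketched below. This is
NOT the poster's actual code, just an assumed pattern, and the topic value
and increment are invented:

```sql
-- Hedged sketch of a high-rate per-topic aggregate update:
BEGIN;
SELECT cnt, sum FROM temp_agg WHERE topic = 'sensor-1' FOR UPDATE;
UPDATE temp_agg
   SET tstmp = now(), cnt = cnt + 1, sum = sum + 3.14
 WHERE topic = 'sensor-1';
COMMIT;
```

With only 500 rows and a high update rate, each row accumulates dead
versions quickly, which is why vacuum behavior dominates tuning here.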
Hi,
I tried both ways: they're both OK.
Also, putting multiple VALUES rows in one INSERT actually performs better.
Thanks again
Pupillo
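The multi-row form mentioned above looks like this (table and values are
illustrative):

```sql
-- One INSERT carrying several VALUES rows: a single statement and a
-- single round trip, which usually beats separate single-row INSERTs.
INSERT INTO events (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
```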
2016-12-06 19:49 GMT+01:00 Tom Lane :
> [ please keep the list cc'd ]
>
> Tom DalPozzo writes:
> > To be honest, I didn't
Hello Thomas,
> > (Maybe database clusters should have a header that wouldn't allow
> > incompatible server versions to process the existing data. I wonder if it
> > would take more than 8 bytes per server. But I guess it was not known to
> > be incompatible. Even my two CIs didn't show it.)
>
>