, especially when the underlying data is
changing faster than your reporting process can generate the report.
--
Tim Cross
ve the snap and then use apt to get the normal deb package
install. The snap package system does not use the normal locations for
config files.
This could also be an Ubuntu 18.04 issue. This version was only released a
couple of weeks ago and it is the first version which has Postgres 10 as
the default. I would try the following
1. use systemctl to stop postgresql service
2. remove all postgres packages making sure all config files are also
removed
3. Use synaptic to make sure all postgres package and associated config
files have been removed.
4. Reboot
5. run apt update and then apt upgrade
6. re-install using apt (not the software centre).
--
regards,
Tim
--
Tim Cross
ed side effects. If a function
cannot be viewed in a single screen, it is probably too big and trying
to do too many different things which should be broken up into smaller
functions.
regards,
Tim
--
Tim Cross
CSV files, so you may be better off writing a
small 'wrapper' app which uses the same drivers. Assuming your
database connectivity has been abstracted into some sort of
module/library/class, it can use the same interface to write to the
database that the application uses.
Tim
--
Tim Cross
issue.
Definitely check AD logs as well - the issue could be simply that adding
a new system has increased demand sufficiently to degrade performance of
AD (though I would expect there would be complaints from others outside
the DB area if this was the case).
The GSSAPI approach is not as complicated as it sounds, but it can be
affected by environment/infrastructure architecture and it will be
critical to ensure you have good time synchronisation. This can be
somewhat challenging in hybrid environments where you have a mix of
local and remote services. When it all works, it is great, but when you
do have a problem, diagnosis can be challenging.
The overall approach of having one identity with one password per entity
is IMO the right approach and your only hope for good password policy
application. However, getting to that point can be very challenging.
--
Tim Cross
Moreno Andreo writes:
> Hi Tim,
>
> Il 29/05/2018 00:06, Tim Cross ha scritto:
>> Moreno Andreo writes:
>>
>>> Hi folks,
>>> I'm trying to install Postgresql 9.6 on a test machine in Google Cloud
>>> Platform
>>> After a fres
keys to verify the
authenticity of packages it downloads. My guess is that previously, you
only needed to ensure the package had a dependency on apt-key and now
apt has changed (or is changing) such that you need an explicit dependency on
either gnupg or gnupg2.
--
Tim Cross
I'll just take a leaf out of 'the Guide', grab my towel and not
panic!
Tim
--
Tim Cross
h needs to be
addressed.
I'm not aware of any guideline or school of thought which rules out this
as an option. Possibly the reason it appears to be used infrequently is
because it doesn't realise the benefits you might expect or is simply
not a problem in a majority of use cases.
Tim
--
Tim Cross
et the right
combination of data format and copy definition.
It may take some effort to get the right combination, but the result is
probably worth it given your data set size i.e. difference between hours
and days.
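One part of "getting the right combination" is making the file's quoting and NULL handling match what the COPY command declares. A minimal sketch (the table and rows are hypothetical; Python's csv module produces output compatible with COPY's csv format):

```python
import csv
import io

# Rows destined for a hypothetical:
#   COPY measurements (id, reading, note)
#   FROM STDIN WITH (FORMAT csv, NULL '\N')
# The NULL marker written here must match the one declared in COPY.
rows = [(1, 3.14, "ok"), (2, None, 'contains "quotes", commas')]

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
for r in rows:
    # Encode SQL NULL as the same marker the COPY command expects.
    writer.writerow(r"\N" if v is None else v for v in r)

print(buf.getvalue())
```

Fields containing quotes or commas are quoted and escaped for you, which is exactly the class of mismatch that usually makes a bulk load fail halfway through.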
--
Tim Cross
errors or providing more specific detail regarding the cause of the
error.
Be wary of what indexes you're defining on your table. Depending on the
type and number, these can have significant impact on insert times as
well.
--
Tim Cross
On Mon, 25 Jun 2018 at 11:38, Anto Aravinth
wrote:
>
>
> On Mon, Jun 25, 2018 at 3:44 AM, Tim Cross wrote:
>
>>
>> Anto Aravinth writes:
>>
>> > Thanks for the response. I'm not sure, how long does this tool takes for
>> > the 70GB dat
f semester periods, but often cycle between two
addresses, their college and their parental home). The downside of
this approach is that applications which insert this information must
remember to execute both SQL statements. If you have multiple
interfaces, this might become a maintenance burden (one of the
advantages of using a DB function).
Tim
--
Tim Cross
s are superfluous.
Have you tried doing the same thing where the fk keys and remote unique
index keys are equal in number?
--
Tim Cross
of my technical work over the last 10 years has been with Oracle. I
prefer to use scripts over GUI tools like pgAdmin and suspect that there
is probably some good resources out there with existing scripts I can
use as a starting point.
Any pointers greatly appreciated.
thanks,
Tim
--
Tim Cross
bad. Establishing some standards and change control will help.
thanks again,
tim
On 16 February 2018 at 14:11, Melvin Davidson <melvin6...@gmail.com> wrote:
>
>
> On Thu, Feb 15, 2018 at 9:22 PM, Tim Cross <theophil...@gmail.com> wrote:
>
>> Hi All,
>>
&
George Neuner <gneun...@comcast.net> writes:
> On Sat, 17 Feb 2018 07:23:32 +1100, Tim Cross <theophil...@gmail.com>
> wrote:
>
>>Thomas Kellerer <spam_ea...@gmx.net> writes:
>>
>>> Plus: trimming the original content, so that not the whole email
ce for postgres and instead of using the old /etc/init.d
scripts, you now do something like systemctl start postgres-10.2.service
Tim
--
Tim Cross
to verify the location the symbolic link in /usr/bin is pointing to is
correct.
Tim
--
Tim Cross
Rich Shepard <rshep...@appl-ecosys.com> writes:
> On Sun, 18 Feb 2018, Tim Cross wrote:
>
>>> # ll /usr/bin/postgres
>>> lrwxrwxrwx 1 root root 35 Feb 17 09:30 /usr/bin/postgres ->
>>> ../lib/postgresql/10.2/bin/postgres*
>
>> Try doing an 'll
Rich Shepard <rshep...@appl-ecosys.com> writes:
> On Mon, 19 Feb 2018, Tim Cross wrote:
>
>> It is possible for the target of a symbolic link to be changed, deleted
>> etc (a dangling sym link).
>
> Tim,
>
>Broken symlinks display in a different col
Rich Shepard <rshep...@appl-ecosys.com> writes:
> On Sun, 18 Feb 2018, Tim Cross wrote:
>
>> This may not be relevant,
>
> Tim,
>
>Nope. Pat goes for stability, not cutting edge. No systemd in the
> forthcoming 15.0, either.
>
> Thanks,
>
> Rich
Ibrahim Edib Kokdemir <kokde...@gmail.com> writes:
> Hi Tim,
> There are good continuously running apps to monitor postgres.
> IMHO, the most successful one is pgcenter.
> Here is the link. https://github.com/lesovsky/pgcenter
>
Thanks, a useful link.
Tim
--
Tim Cross
s
more 'old school' (mu4e), where dealing with such preferences is much
easier, but our wise network admins ban access internally (but you can
still access the web interface...).
Top v bottom posting issues are a blast from the past - I've not even
seen a reference to it since the old Newsgroups days! ;-)
Tim
--
Tim Cross
ion and configuration, maintenance etc has been patchy.
Tim
--
Tim Cross
l the content in the message. This
reduces confusion, misunderstanding and incorrect attribution arising
from poor editing and makes it easy to reference the full discussion in
one spot rather than having to go back through messages putting it all
back together. YMMV.
Tim
--
Tim Cross
o important that all staff are aware of the
organisation's policies, procedures and controls regarding data
access. They need to know what is expected of them and what is
unacceptable.
--
Tim Cross
manually or are such references abstracted such that
> the column name "text" is irrelevant to the actual structure of the
> index?).
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
--
regards,
Tim
--
Tim Cross
On Mon, 13 Aug 2018 at 12:23, Olivier Gautherot
wrote:
> On Sun, Aug 12, 2018 at 11:06 PM, Tim Cross wrote:
>
>>
>> On Mon, 13 Aug 2018 at 11:24, Adrian Klaver
>> wrote:
>>
>>> On 08/12/2018 05:41 PM, Samuel Williams wrote:
>>> > I wish the d
cedures which present a 'mapped' view back to the framework layer
which hides the SQL from the framework. Works well, with the only main
downside being you now have SQL in a different (another) place, which
can make some people uncomfortable and can be a maintenance issue if all
your developers are just front-end devs who treat a database as just a
key/value repository.
Tim
--
Tim Cross
t with shared
passwords. Apart from the security implications, you will almost
certainly run into problems with auditors and many regulatory
standards.
--
Tim Cross
g business requirements in a timely manner. This is often the
most frustrating part - you can be an excellent technical person able to
define and implement really good technical solutions, but if the
customer is unable to use the solution effectively, it will be seen as a
technical failure.
Tim
--
Tim Cross
lect t1.cola t1.colb, t1.colc, t2.y2012 from . Where t2.y2012 != 0;
>
>
> to generalize
>
> select * from FUNCTION( year_column )
>
> select t1.cola t1.colb, t1.colc, t2.year_column from . Where
> t2.year_column != 0;
>
> is it possible? if so how?
>
>
>
--
regards,
Tim
--
Tim Cross
.
>> Also, it is open source?
>
> No, it is freeware
oh well, too bad.
--
Tim Cross
pecially in a library, it implies
> the user has their own computer. As libraries allow users/citizens to
> request books be purchased >at no cost to the user/citizen, the
> argument that someone cannot afford a book is now a moot point.
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>
--
regards,
Tim
--
Tim Cross
Peter J. Holzer writes:
> On 2018-07-18 08:09:35 +1000, Tim Cross wrote:
>> If using web widgets to author content on the wiki is the main
>> impediment for contributing content, maybe we should see if the wiki
>> provides alternative access methods. I've used wi
is misleading or confusing documentation.
My only real concern would be to further fracture the PG user base. If
there are barriers preventing users from adding documentation to the
existing documents or wiki, perhaps it would be better to try and
address those first?
Tim
--
Tim Cross
Dmitry Igrishin writes:
> пн, 16 июл. 2018 г. в 1:14, Tim Cross :
>
>>
>> Your idea to make it integrate with user's preferred editor is a good
>> idea as editors are like opinions and certain anatomical parts -
>> everyone has one! Finding an appropriate API to
made Java as welcome as
it used to be. If you do choose Java, it will need to work under openJDK
as this is what most Linux users will have installed.
Tim
--
Tim Cross
ata model and then start development. Have a look at
https://en.wikipedia.org/wiki/Database_normalization for some background
on the normal forms and why they are useful.
HTH
Tim
--
Tim Cross
so far which would indicate a necessity to have
more columns as you increase the number of agents. It would be normal to
have something like
| agent_id | year | cash_on_hand | bank | creditors | debtors |
and queries like
select sum(cash_on_hand)
from table
where agent_id = 'agent1'
and year = 2018;
to get the sum of cash on hand for agent1 in 2018.
instead of something like
| agent1_cash2018 | agent2_cash2017 |
which will not work well.
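The normalised layout above can be sketched end to end like this (SQLite's in-memory database stands in for Postgres purely to keep the example self-contained; the table name and figures are made up, but the SQL shape is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE agent_accounts (
    agent_id TEXT, year INTEGER, cash_on_hand NUMERIC,
    bank NUMERIC, creditors NUMERIC, debtors NUMERIC)""")

conn.executemany(
    "INSERT INTO agent_accounts VALUES (?, ?, ?, ?, ?, ?)",
    [("agent1", 2018, 100, 500, 30, 20),
     ("agent1", 2018, 250, 500, 10, 5),
     ("agent2", 2018, 400, 900, 0, 0)])

# Adding a new agent is just new rows -- no new columns needed.
total, = conn.execute(
    "SELECT sum(cash_on_hand) FROM agent_accounts "
    "WHERE agent_id = ? AND year = ?", ("agent1", 2018)).fetchone()
print(total)  # 350
```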
Tim
--
regards,
Tim
--
Tim Cross
retty straightforward to drop the indexes and recreate them afterwards
via sql, so we didn't look for a tool as such.
As data is only inserted into this table and only by this process, we
also turned off autovacuum for this table, performing vacuum and analyze
manually after load.
Tim
--
Tim Cross
nd cheaper)
staff. Reality is, you probably want your more skilled and experienced
staff dealing with deployments and they will typically prefer the
flexibility of scripts over the constraints of a GUI.
Given the high level of variability in environments, you are probably
best off developing the process and scripts rather than trying to find
an existing tool. Putting a web front end is likely easier than finding
a tool flexible enough to fit with the environment which avoids
situations where the tool begins to dictate how you operate (tail
wagging the dog).
Tim
--
Tim Cross
obably should be dealt with via management channels rather
than technical ones. Besides, the likely outcome will be your developers
will just adopt the practice of adding a serial column to every table,
which in itself doesn't really add any value.
Tim
--
Tim Cross
target class
> org.postgresql.ds.PGSimpleDataSource java.lang.NumberFormatException: for
> input string: "5432,10.16.10.13:5432 <http://10.16.10.13:5432>"*
>
> It looks like the first IP address has disappeared or is not set right.
>
> Thanks in advance for the help.
>
>
>
>
--
regards,
Tim
--
Tim Cross
er_clusters. The wrapper
script uses that file to determine what databases to connect to or what
is the user default database cluster. It can also be overridden with a
local ~/.postgresqlrc, so check there are no old settings there as well.
Tim
--
Tim Cross
nce between business
responsiveness to change and long term maintenance/viability.
Unfortunately, such developers are rare, so it will usually mean there are
a team of people with different skills and what will matter is how well
they are able to work together as a team and come up with an architecture
which satisfies the business requirements.
--
regards,
Tim
--
Tim Cross
y your dominant platform in the
market your application targets.
--
Tim Cross
disk from the SAN and then access the data.
Then of course there is the bureaucratic protection - "Yes boss, all our
data is encrypted on disk."
Tim
--
Tim Cross
licy,
just use full disk encryption and state that all data is encrypted on
disk" and you're done.
Tim
--
Tim Cross
Andres Freund <and...@anarazel.de> writes:
> Hi,
>
> On 2018-04-23 08:30:25 +1000, Tim Cross wrote:
>> the recent article in LWN regarding issues with fsync and error
>> reporting in the Linux kernel and the potential for lost data has
>> prompted me to ask 2
stemd while 14.04 used upstart?
After a Ubuntu upgrade, there is usually an upgrade log you can also
check to see if anything failed or provided warnings regarding
additional actions required.
HTH
Tim
--
Tim Cross
concatenated into the string?
thanks,
Tim
--
Tim Cross
ght any bugs. Because, of course, unit tests are
> only as good as you imagination in devising tests.
+1. And a good test of your underlying data model is whether you can
identify a natural primary key. If you can't, chances are your model is
immature/flawed and needs more analysis.
--
Tim Cross
best suited given whatever
infrastructure they have.
Tim
--
Tim Cross
don't need to do a full "insert into blah ()
values (...)" for each insert.
2. If it really is an insert you want to do and you already have the
data in a file e.g. CSV or similar, then you can use the \copy command
to process the file, which is very fast.
3. Is it really insert or update you need?
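Point 1 (one statement carrying many VALUES tuples rather than one INSERT per row) can be sketched like this; SQLite's in-memory database is used only so the example runs standalone, and the table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blah (id INTEGER, val TEXT)")

rows = [(i, f"val{i}") for i in range(100)]

# Build: INSERT INTO blah (id, val) VALUES (?, ?), (?, ?), ...
# One round trip and one statement for the whole batch.
placeholders = ", ".join(["(?, ?)"] * len(rows))
flat = [x for row in rows for x in row]
conn.execute(f"INSERT INTO blah (id, val) VALUES {placeholders}", flat)

count, = conn.execute("SELECT count(*) FROM blah").fetchone()
print(count)  # 100
```

Batch size has to respect the driver's parameter limit, so real loaders usually chunk the row list.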
--
Tim Cross
what I
need. As usual, it is more about requirements than brand and choosing
the right tool for the right job.
Tim
--
Tim Cross
;> postgres compiled as a linux binary is running on MS's new-ish linux
>> emulation.
>
> Whee ... so you get to cope with all the bugs/idiosyncrasies of three
> operating system layers, not just one.
That comment has made my day - thanks Tom!
--
Tim Cross
h Linux, though these
days, the Windows environment is often too locked down to allow
this. I've not yet experimented with the virtual linux layer in w10.
--
Tim Cross
values. Things can quickly become complicated as you can
have locale information at both the server and client end and they may
not be the same.
As you should always be sanitising your data before inserting into the
database anyway, you may as well just add this as another check at the
client end.
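A minimal client-side sanitiser for locale-sensitive numeric input might look like the following. This is a hypothetical helper, not a PG or libpq API, and its heuristics (rightmost separator wins) are deliberately simple:

```python
def parse_decimal(raw: str) -> float:
    """Normalise '1.234,56' / '1,234.56' style input to a float
    before it goes anywhere near an INSERT."""
    s = raw.strip().replace("\u00a0", "").replace(" ", "")
    if "," in s and "." in s:
        # Assume the rightmost separator is the decimal point.
        if s.rfind(",") > s.rfind("."):
            s = s.replace(".", "").replace(",", ".")
        else:
            s = s.replace(",", "")
    elif "," in s:
        # Ambiguous: '1,234' could be a thousands group; this sketch
        # treats a lone comma as the decimal separator.
        s = s.replace(",", ".")
    return float(s)

print(parse_decimal("1.234,56"))  # 1234.56
print(parse_decimal("1,234.56"))  # 1234.56
```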
Tim
--
Tim Cross
etc). These are well known problems in distributed or replication
systems.
You appear to be implementing a 'poor man's' replication system. There
are lots of complex issues to deal with and I wonder why you want to
take them on when PG has already got well tested and robust solutions
for this that would simplify your architecture and avoid the need to
re-implement functionality which already exists?
regards,
Tim
--
Tim Cross
with inappropriate use of that schema. It depends
heavily on how the database is used. A database used for a single
application has a completely different security and risk profile from a
database used by multiple users for different applications. Arbitrary
rules such as 'you won't use PUBLIC' are almost always wrong and often
just make both developer and dba lives more complicated and harder to
maintain. Complexity is where things go wrong and where security tends
to break down.
Rather than requiring the developer to use a specific schema, I would
1. Ask them why they believe they have to use the PUBLIC schema
2. If the need to use the PUBLIC schema is confirmed, then work with the
developer to understand what the access requirements are and develop an
appropriate model.
3. If there is no dependency on using the PUBLIC schema, work with the
developer to assist them to resolve their access issues.
Depending on the size of the organisation and complexity of the
environment, choice of libraries and modules is not always as
straight-forward. It may not be easy to switch to another library/module
with better support for schemas etc or even to upgrade to a new
version. Often, such changes will need to be managed in stages and over
time. Work with the developers as complex environments will frequently
require a united voice in order to get changes approved or prioritised.
Tim
--
Tim Cross
get
to a usable and stable state eventually and will likely be a pretty good
replacement for pgAdmin3. However, currently, I find it still a little
too unstable.
Personally, I'm pleased I spent the time to get my Emacs and psql
integration working to the point that I do 90% of what I need in psql
--
Tim Cross
work with, but that is a
different issue.
There are some bloody awful Java applications out there, but this really
means, assess on a per app basis, not a blanket ban on all of
them. There are insecure and poorly written apps in every language.
Tim
--
Tim Cross
s are granted, then you
should be able to revoke them effectively.
Tim
--
regards,
Tim
--
Tim Cross
tore Blobs in PostgresSQL.
>
> Where do you store Blobs?
>
> (In my case Blobs are PDF/image files with size up to 20 MByte.
> I do not talk about very big blobs which are several hundred MByte)
--
Tim Cross
t is, you may
find adding a column which is a checksum of your 'document' field a
useful addition. I have done this in the past where I had an application
where name was not unique and we only wanted distinct instances of
'document' (document was a fairly large XML document in this case).
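Computing such a checksum on the client side is straightforward; a sketch (the column and UNIQUE-constraint arrangement is up to your schema, and sha256 is just one reasonable digest choice):

```python
import hashlib

def document_checksum(document: str) -> str:
    # Stable digest of the document body; store it alongside the row and
    # put a UNIQUE constraint on it to keep only distinct documents.
    return hashlib.sha256(document.encode("utf-8")).hexdigest()

a = document_checksum("<doc><title>report</title></doc>")
b = document_checksum("<doc><title>report</title></doc>")
c = document_checksum("<doc><title>report v2</title></doc>")
print(a == b, a == c)  # True False
```

Comparing 64-character digests is much cheaper than comparing large XML bodies, which is the whole point of the extra column.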
--
Tim Cross
nux more with bash
and linux subsystem, availability of linux VMs in Azure etc.
For your larger databases, I see more Linux than MS. This could be
related to storage and file systems more than anything else.
--
Tim Cross
it would do nothing i.e. no completion
candidates found, telling me there is no match based on the prefix I've
typed.
--
regards,
Tim
--
Tim Cross
On Tue, 18 Jun 2019 at 10:39, Tom Lane wrote:
> Tim Cross writes:
> > On Tue, 18 Jun 2019 at 09:34, Ken Tanzer wrote:
> >> Thanks Adrian, though I wasn't really seeking tips for column names. I
> >> was instead trying to understand whether this particular tab expan
it away.
>
> Any insight is helpful. My biggest fear is that for whatever reason we will
> not be able to reload this table during any particular crisis in the future
> should it come to that.
>
> Thanks.
Which version of postgres?
How are you loading the data? (application, psql, pg_restore) using
(insert, copy)?
--
Tim Cross
e for PG version ?
>
> Also, since the server will be a VM, are there any special
> recommendations/suggestions might I forward in the request (install
> options, tuning options, other) ?
>
> Thanks !
--
Tim Cross
impossible position and have no
hope of implementing anything that will be maintainable and you will
never be able to manage security.
I'm hoping you mean different agencies which need to add/modify rows
wihtin the tables?
--
Tim Cross
associated with a single country, you can potentially have multiple
conversion routines required. On most *nix systems, clock time is UTC as
well, so having everything in UTC really helps when you want to do
diagnosis across your database and system log files etc.
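The store-UTC, convert-at-the-edges pattern in miniature (fixed offsets are used here only to keep the sketch self-contained; in practice you would use zoneinfo with real time-zone names so DST is handled for you):

```python
from datetime import datetime, timezone, timedelta

# One canonical UTC value stored in the database...
stored = datetime(2018, 7, 10, 0, 56, 43, tzinfo=timezone.utc)

# ...converted per client at display time.
sydney = timezone(timedelta(hours=10))    # AEST, ignoring DST for the sketch
new_york = timezone(timedelta(hours=-4))  # EDT

print(stored.astimezone(sydney).isoformat())    # 2018-07-10T10:56:43+10:00
print(stored.astimezone(new_york).isoformat())  # 2018-07-09T20:56:43-04:00
```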
--
Tim Cross
n data remotely in the database. I have seen situations with a
very specialised application where having a more full featured LOCAL (to
the client) database server to record session information can be useful,
but this is rare.
--
Tim Cross
are up-to-date wrt latest
minor release for that version and would use clients with the same
version as the master.
--
Tim Cross
ut are in fact developing at the
node.js level, then you don't need webpack.
I think what you really need to do is step back and look closer at your
architecture. Typically, you would put all your database interaction
stuff in the web server using node.js. A common design pattern would be
to use one of the node.js web servers, like express (but there are
others) and have something like nginx as a proxy server in front of it.
You would then wrap your database interaction in a node.js API which you
would then call from your client browser using http/https or web
sockets.
--
Tim Cross
abled and check for things like
overly frequent writing of WALs. This can have significant impact on
performance. If your rows are large, you may be adversely impacting
performance writing the WAL cache etc.
--
Tim Cross
l be used and what the expectations of the users are.
Maybe there is a legitimate business case to allow partial data entry,
in which case, you may need a different approach or a way to identify
partial/incomplete records etc.
--
Tim Cross
.e. /etc/apt/sources.list or
/etc/apt/sources.list.d/postgres.list (or whatever you have called
it). Try adding the arch option as
deb [ arch=amd64 ] http://.
deb-src [ arch=amd64 ] ...
This should tell apt to only look for the amd64 packages.
--
Tim Cross
ces to establish, it does tend to result in
reduced maintenance costs in the longer term.
--
Tim Cross
Geoff Winkless writes:
> On Wed, 6 May 2020 at 00:05, Tim Cross wrote:
>> Where Tom's solution fails is with smaller companies that cannot afford
>> this level of infrastructure.
>
> Is there an objection to openldap? It's lightweight (so could
> reasonably be run on t
something failed unexpectedly, you still have the report).
There are ways you can trigger periodic activity in the database, but to
be honest, CRON is easy and reliable and avoids needing to add
additional extensions etc to the DB. Turning off the report, changing
the time etc, is a simple crontab edit.
--
Tim Cross
e database and writing
JSON queries using PG's SQL JSON support).
--
Tim Cross
ind this a sign
you could be re-structuring your query to be a join between two tables
where one table contains the IDs of interest rather than trying to embed
them into the query as part of a where clause.
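The restructuring can be sketched like this (SQLite in-memory is a stand-in for Postgres, and the tables are invented; in Postgres the `wanted` table would typically be a temp table populated via COPY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item{i}") for i in range(1, 11)])

ids_of_interest = [2, 3, 5, 7]

# Instead of "... WHERE id IN (2, 3, 5, 7, ...)" with thousands of
# literals, load the IDs into a table and join against it.
conn.execute("CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted VALUES (?)",
                 [(i,) for i in ids_of_interest])

names = [n for (n,) in conn.execute(
    "SELECT i.name FROM items i JOIN wanted w ON w.id = i.id "
    "ORDER BY i.id")]
print(names)  # ['item2', 'item3', 'item5', 'item7']
```

The planner can then use an index on the join rather than scanning a huge literal list, and the statement itself stays small.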
--
Tim Cross
Zwettler Markus (OIZ) writes:
> Hi Marco,
>
>
>
> How do you handle these conflicts? No longer updating that regularly or not
> at all anymore?
>
Not doing the updates is a poor option due to the potential security
vulnerabilities this may lead to. Likewise, delaying the application of
y primary
development environments and will use psql and sqlplus before Toad,
pgAdmin, sqlDeveloper etc.
Tim
P.S. for moving Oracle databases, we use to just use sed and change the
paths in the control file. Worked remarkably well. Often used this
technique to 'refresh' our dev or testing systems to current prod data.
--
Tim Cross
for all the DBA
stuff, Oracle is nice to work with. However, you tend to only be in that
situation when you're working in a large, usually bureaucratic,
environment, which tends to detract from the whole experience in other
ways. If you're unlucky enough to also be using any of the Oracle 'value
add' extensions, development frameworks, application layers etc, it is
really horrible and mind numbing.
Apart from this, Oracle licensing is an absolute nightmare. Beyond
the expense, the complexity is unbelievable and it is almost impossible
to know with any certainty what you will be paying in 12, 24 or more
months.
--
Tim Cross
to protect against accidental
modification of the data or have reasonable confidence (for some measure
of reasonable), just having a checksum hash may be sufficient. The big
advantage of the simpler case with no actual data encryption is that
other clients can access/use the data and not require access to
> as "2019-07-10 10:56:43.21" (change in Year).
>
> What could be the issue? we tried changing the default value to
> "localtimestamp".
>
My bet would be you have some SQL statements which include a value for
'createddate', so the default is not being used.
--
Tim Cross
, and poke at
> what the *real* requirement is there, and why someone thinks that the
> values should be "somewhat coordinated." Something seems off there.
I agree and was going to write something similar. All the 'solutions'
are problematic in one way or another and seem to be due to a
misconception about the role of sequences or some requirement which
needs to be re-examined.
--
Tim Cross
ll allow you to set a max pool size. In addition to
enabling you to 'reserve' a set number of connections for a client, you
will know which client seems to be running out of connections, helping
to identify the culprit.
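The idea in miniature: a toy bounded pool (this is a sketch, not a real pooler like pgbouncer, and the class and names are invented) showing how a hard max both caps connections and names the client that ran out:

```python
import queue

class BoundedPool:
    """Toy fixed-size connection pool: a hard max both caps total
    connections and identifies which client exhausted its share."""
    def __init__(self, name, max_size, connect):
        self.name = name
        self._q = queue.Queue()
        for _ in range(max_size):
            self._q.put(connect())

    def acquire(self, timeout=0.1):
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            raise RuntimeError(f"pool '{self.name}' exhausted") from None

    def release(self, conn):
        self._q.put(conn)

# 'connect' is a stand-in for opening a real DB connection.
pool = BoundedPool("web-app", max_size=2, connect=lambda: object())
a, b = pool.acquire(), pool.acquire()
try:
    pool.acquire(timeout=0.05)
except RuntimeError as e:
    print(e)  # pool 'web-app' exhausted
pool.release(a)
```

When the error names the pool, you know which client is leaking or over-using connections rather than just seeing the server hit max_connections.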
--
Tim Cross
Bee.Lists writes:
>> On Jun 23, 2020, at 8:09 PM, Tim Cross wrote:
>>
>> Sounds like your web app may not be closing connections once it has
>> finished with them? The fact your seeing idle connections would seem to
>> support this. I would be verifying t
s in developers
not understanding the power of the underlying RDBMS and encourages poor
DB schema design. Those who support such technologies typically point to
the benefits of database neutrality such systems can provide. In over 30
years of DB work, I have yet to see such neutrality actually work. It is
a pipe dream.
--
Tim Cross
specifically too slow for
acceptable performance, whatever that is). Assuming you will need to
take lots of special action may be premature - you may need to do none
or only a couple of things. Get a baseline first and you will know how
big of an issue you have. You will also be able to determine if what you
try has any benefit.
--
Tim Cross
last 'nextval' (up
until maxvalue). It is unaware of the use i.e. whether it will be used
in a insert or what table that insert is against. So I'm not sure what
your concern with a partitioned table is? Can you elaborate?
--
Tim Cross
You cannot make any additional
assumptions e.g. cannot assume gid values will be inserted in order or
there won't be 'gaps' etc.
--
Tim Cross
nd them which are almost certainly going to be even
worse from a security perspective.
--
Tim Cross
to find a good solution to that issue. It is
probably something which needs to be built into a tool. In the past,
I've used a modified sqitch approach that also maintains a small 'dbadm'
schema containing metadata to track dependencies. Although this worked
OK, especially if you understood how all the bits fit together, it still
had many corner cases and to some extent highlighted the complexities involved.
--
Tim Cross
st, where changes can be tracked, analysed and
rolled
back, where re-factoring can use advanced tools and can work across
projects, not just on a single script, where code sharing is easy and
where people can use their preferred tool rather than being forced to
use something which understands the database.
--
Tim Cross