If you are asking if you should go nosql, 99% you should not.
On Tue, Apr 11, 2017 at 10:06 PM, Poul Kristensen wrote:
> dataverse.org uses PostgreSQL and is well documented + it is completely
> user driven. Maybe the concept could be useful for you. I have installed
> and
Just share the slides/video in this thread, friend.
On Sat, Jan 21, 2017 at 10:57 AM, Seref Arikan
wrote:
> Any chance this will be recorded? The content looks great and would be of
> interest to many.
>
> Cheers
> Seref
>
>
> On Sat, Jan 21, 2017 at 8:55 AM, Chris Travers
Hello friends,
When updating a row that has a TOAST column, is the TOAST column also
inserted? Or just the OID?
Say I have a 1 MB value in the TOAST column, and I update the row by
changing another column. Since every update is an insert, will it also
reinsert the TOAST column? The column that I
@Aleksander
Nearly everyone wants lower storage usage and some kind of compression.
Could this be made to retrain automatically when analyzing (does that make
sense?), creating a new dictionary only if it differs from the last one?
On Tue, Oct 4, 2016 at 5:34 PM, Aleksander Alekseev <
If the connection is in autocommit mode, then each statement will also
incur a commit (a write to the commit log on disk).
On Fri, Sep 23, 2016 at 2:01 PM, Rakesh Kumar
wrote:
> Hi
>
> I am noticing that if I do this
>
> insert into table values(1,a)
> insert into table
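The commit-per-statement point above can be sketched with a self-contained
example; stdlib sqlite3 stands in for PostgreSQL here purely so the sketch
runs anywhere (with psycopg2 the shape is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, "a") for i in range(1000)]

# One explicit transaction: a single commit covers all 1000 inserts,
# instead of autocommit paying one commit (a disk write) per statement.
with conn:  # the connection context manager commits on success
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 1000
```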
Check out VoltDB (or ScyllaDB, which diverges further) for the changes in
architecture required to achieve those performance increases.
On Fri, Sep 2, 2016 at 7:32 PM, Andres Freund wrote:
> On 2016-09-02 11:10:35 -0600, Scott Marlowe wrote:
> > On Fri, Sep 2, 2016 at 4:49
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe wrote:
> Honestly, I've never heard of anyone doing that. But it
@Konstantin
1. It's OK in my cases.
2. Not required in my cases.
3. Just require users to use different servers for now, I think.
Sometimes (always?) users can be greedy with feature requests.
4. I want magical consistency + failover (I can instruct the client to
retry all masters).
Good-cluster
Is there any database that actually supports what the original poster
wanted?
The only thing I know that's similar is the bigtable/hbase/hypertable
wide-column store.
The way it works: it breaks the lexicographically sorted rows into
compressed blocks of XX KB, then keeps an index on the
Happy Holidays!
Let's have automatic sharding and distributed transactions!
On Fri, Jan 1, 2016 at 3:51 PM, Melvin Davidson
wrote:
> Happy New Year to all!
>
> On Fri, Jan 1, 2016 at 2:40 AM, Michael Paquier wrote:
>
>> On Fri, Jan 1, 2016
1,2,3: You can't shard with BDR. It's only for multi-master (at least for
now). Please read the docs.
On Fri, Jul 17, 2015 at 9:02 AM, Amit Bondwal bondwal.a...@gmail.com
wrote:
Hello everyone,
We are working on an application in which we are using PostgreSQL as the
database. We are sure that in
Please do reply-all so you also reply to the list.
It's not a good idea to develop with SQLite and deploy on PostgreSQL. You
should keep your 'dev' as close to 'prod' as possible.
Product_feature is another table in this case?
On Tue, Jun 2, 2015 at 11:44 AM, Adrian Stern adrian.st...@unchained.ch
to limit keys to specific groups of
products.
Freundliche Grüsse
Adrian Stern
unchained - web solutions
adrian.st...@unchained.ch
+41 79 292 83 47
On Tue, Jun 2, 2015 at 12:58 PM, Dorian Hoxha dorian.ho...@gmail.com
wrote:
What about keeping all the dynamic columns of each product in a json(b)
column?
Maybe you can add constraints that check the product_type and the JSON
field types?
On Mon, Jun 1, 2015 at 4:35 PM, Adrian Stern adrian.st...@unchained.ch
wrote:
Hi, I'm new
I've been working as the sole
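The per-product-type constraint idea above can be sketched in Python; the
product types and their required fields below are made-up examples, and the
check mirrors what a CHECK constraint on a json(b) column could enforce:

```python
# Hypothetical field requirements per product type (illustration only).
REQUIRED_FIELDS = {
    "shirt": {"size", "color"},
    "laptop": {"cpu", "ram_gb"},
}

def valid_product(product_type, dynamic_fields):
    """Return True if the JSON document carries every field
    its product type requires."""
    required = REQUIRED_FIELDS.get(product_type)
    if required is None:
        return False  # unknown product type
    return required.issubset(dynamic_fields)

print(valid_product("shirt", {"size": "M", "color": "red"}))  # True
print(valid_product("shirt", {"size": "M"}))                  # False
```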
That's spam. Can an admin ban this user/email?
On Fri, May 1, 2015 at 8:22 AM, recoverdata susanhsulliv...@gmail.com
wrote:
When a file is deleted from your computer, its contents aren't immediately
destroyed. Windows simply marks the hard drive space as being available for
use by changing
Hi Jeff,
Looks good. Some questions:
- How is data stored? btree, LSM, etc.?
- Indexes on the views: e.g. for a view counting daily views per URL, I
may need to query by day or by URL (table scans?).
- Distribution/sharding? How will things stay consistent if the view's
I don't see how it could have a negative impact on the PostgreSQL project?
It's not as if your job would be to find vulnerabilities and not disclose
them?
On Wed, Mar 11, 2015 at 1:28 PM, Bill Moran wmo...@potentialtech.com
wrote:
I've been asked to sign a legal document related to a PostgreSQL-
Thanks John.
On Thu, Aug 28, 2014 at 2:35 PM, John McKown john.archie.mck...@gmail.com
wrote:
On Mon, Aug 18, 2014 at 10:52 AM, John McKown
john.archie.mck...@gmail.com wrote:
SELECT avg(b.countcountry)::int as CountryCount, b.country, a.city,
count(a.city) as CityCount
FROM t AS a
I have CREATE TABLE t (country text, city text);
I want to get with 1 query,
select count(country), country FROM t GROUP BY country ORDER BY count(country) DESC
And for each country, to get the same for cities.
Is it possible ?
Thanks
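The per-country and per-city counts asked about above can be sketched with
two GROUP BY queries; stdlib sqlite3 stands in for PostgreSQL here so the
sketch is self-contained, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (country TEXT, city TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("US", "NYC"), ("US", "NYC"), ("US", "LA"), ("AL", "Tirana"),
])

# Rows per country, biggest first.
per_country = conn.execute(
    "SELECT country, count(*) FROM t GROUP BY country ORDER BY count(*) DESC"
).fetchall()

# Rows per (country, city), biggest first.
per_city = conn.execute(
    "SELECT country, city, count(*) FROM t "
    "GROUP BY country, city ORDER BY count(*) DESC"
).fetchall()

print(per_country)  # [('US', 3), ('AL', 1)]
```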
Try this:
Don't catch the exception when you make the connection, so you can see the
real error, because "I am unable to connect" may mean different things:
1. wrong user, 2. wrong password, 3. server down, etc.
On Fri, May 16, 2014 at 12:12 PM, image lcel...@latitude-geosystems.comwrote:
Dear all,
I'm
Also remove the first try and remove the space before conn= so you have
this:
#!/Python27/python.exe
import psycopg2
# Try to connect
conn = psycopg2.connect(dbname='busard_test', user='laurent',
host='localhost', password='cactus')
cur = conn.cursor()
On Fri, May 16, 2014 at 12:41 PM,
Since I can't understand the language (French?), what does it say? Probably
wrong authentication (password?).
On Fri, May 16, 2014 at 1:19 PM, image lcel...@latitude-geosystems.comwrote:
Thanks for your help.
So I removed the first try + removed the space before conn=.
Indeed I have a new
If you don't do read queries on the slave, then it will not have hot
data/pages/rows/tables/indexes in RAM like the primary? (It smoked weed
and was happy doing nothing, but when responsibility came (being promoted
to master) it failed hard.)
On Thu, May 15, 2014 at 6:46 AM, Kevin
Search for fulltext tutorial + json functions
http://www.postgresql.org/docs/9.3/static/functions-json.html
On Wed, May 14, 2014 at 1:00 AM, Jesus Rafael Sanchez Medrano
jesusraf...@gmail.com wrote:
thanks... could you please be so kind as to post some snippet/code for this?
Att.
==
Jesus
functions will come with jsonb in 9.4).
Peeyush Agarwal
On Tue, May 13, 2014 at 3:13 PM, Dorian Hoxha dorian.ho...@gmail.comwrote:
Why not store the session as an integer?
And the timestamp as timestamp(tz?)?
If you know the types of events, also store them as integers, and save a
map of them in the app or in another table?
And save the parameters as a json column, so you have more data types?
Hstore only has strings.
Be careful
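The suggestion above (integer event codes plus a name map kept in the app)
can be sketched in a few lines; the event names and codes are made up for
illustration:

```python
# Application-side map from event name to the small integer that
# actually gets stored in the events table.
EVENT_CODES = {"click": 1, "view": 2, "purchase": 3}
# Reverse map for turning stored codes back into names when reading.
CODE_NAMES = {code: name for name, code in EVENT_CODES.items()}

stored = EVENT_CODES["view"]        # value written to the integer column
print(stored, CODE_NAMES[stored])   # 2 view
```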
So:
1. drop the function
2. alter the type: add the column
3. create the function again with the new default argument, all in a
transaction?
On Tue, Apr 29, 2014 at 4:22 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Sun, Apr 27, 2014 at 4:57 PM, Dorian Hoxha dorian.ho...@gmail.com
wrote:
Since my
Hi list,
I am trying to use PostgreSQL as a queue for long jobs (max ~4 hours)
using advisory locks. I can't split a long job into sub-jobs.
1. In the very best case there will be ~100 workers, so no web-scale
performance required.
Is there a problem with 100 open
I'll probably ask the @pgbouncer mailing list whether I can use it with
advisory locks per session. If not, even raw sessions will be enough.
Some comments inline.
Thanks
On Sun, Apr 27, 2014 at 10:07 PM, David G Johnston
david.g.johns...@gmail.com wrote:
Dorian Hoxha wrote
Hi list,
I am trying
of
composite_types by not specifying all of the columns for each
composite_type?
So if I later add other columns to the composite_type, the insert query
doesn't break?
Thanks
On Mon, Apr 21, 2014 at 1:46 PM, Dorian Hoxha dorian.ho...@gmail.comwrote:
Maybe the char array link is wrong? I don't
Currently hstore is like MongoDB: it writes the keys every time (and the
values as strings!; it's mostly for dynamic keys or very sparse keys, in
my opinion). You can shorten the keys, or put them in dedicated columns.
I haven't read that there is any plan to compress the strings.
On Tue, Apr 22, 2014 at 2:01 PM,
top-posted (Dang iPhone). Continued below:
On 04/20/2014 05:54 PM, Dorian Hoxha wrote:
Because I always query the whole row, and the other way (many tables) I
would always join + need other indexes.
On Sun, Apr 20, 2014 at 8:56 PM, Rob Sargent robjsarg...@gmail.comwrote:
Why do you think
Hi list,
I have a
create type thetype as (width integer, height integer);
create table mytable(thetype thetype[]);
How can I make an insert statement so that if I later add fields to the
composite type, the code/query doesn't break?
Maybe by specifying the fields of the composite type in the query?
, you could consider
using json or hstore if the data is unstructured.
On 20/04/2014 14:04, Dorian Hoxha dorian.ho...@gmail.com wrote:
immune to most future type changes.
Sent from my iPhone
On Apr 20, 2014, at 11:57 AM, Dorian Hoxha dorian.ho...@gmail.com wrote:
Was just curious about the overhead.
I know the columns, but I may need to add other columns in the future.
Yeah, json is the alternative if this doesn't work.
PostgreSQL has two column stores: one in-memory (can't remember the name)
and http://www.citusdata.com/blog/76-postgresql-columnar-store-for-analytics
On Sat, Apr 19, 2014 at 2:10 PM, Robin robin...@live.co.uk wrote:
bottom post
On 19/04/2014 12:46, R. Pasch wrote:
On 19-4-2014 9:38, Robin wrote:
Cache the total ?
On Thu, Apr 3, 2014 at 3:34 PM, Leonardo M. Ramé l.r...@griensu.com wrote:
Hi, in one of our systems we added a kind of pagination feature that
shows N records of Total records.
To do this, we added a count(*) over() as Total field to our queries
instead of doing
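The count(*) over() trick mentioned above can be sketched self-contained;
stdlib sqlite3 stands in for PostgreSQL (this assumes SQLite >= 3.25 for
window functions), and the table and row counts are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(12)])

# count(*) OVER () attaches the total row count to every returned row,
# so one page of results also tells you "N of Total" without a second
# counting query: LIMIT applies after the window function is computed.
page = conn.execute(
    "SELECT id, count(*) OVER () AS total FROM items ORDER BY id LIMIT 5"
).fetchall()
print(page)  # 5 rows, each carrying total=12
```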
Link to hackernews which also has some comments from the devs
https://news.ycombinator.com/item?id=7523950
Very interesting: They use foreign data tables as an abstraction to
separate the storage layer from the rest of the database.
I'll probably go by using 3 queries and putting them in a transaction.
Thanks
On Wed, Nov 27, 2013 at 5:38 PM, David Johnston pol...@yahoo.com wrote:
Dorian Hoxha wrote
Hi,
So i have (table where data will be read) :
CREATE TABLE data (vid,cid,pid,number);
Tables where data
Hi,
So I have (table where data will be read):
CREATE TABLE data (vid,cid,pid,number);
Tables where data will be written/updated:
CREATE TABLE pid_top_vids (pid, vid[])
CREATE TABLE pid_top_cids (pid, cid[])
CREATE TABLE cid_top_vids (cid, vid[])
I need to, possibly in 1 query, this will run
I have: create table tbl (a,b,c,d,e,f,g,h);
And I need to select, in 1 query or the most performant way:
top 5(a)
top 5(b)
top 5(c): for each top 5(c): top 5(d)
count(f) GROUP BY f
I can make these as separate queries, but that means PostgreSQL would
read the table multiple times?
Is it
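The "top 5 of several columns in one pass" idea above can be sketched in
Python: a single scan over the rows feeds one Counter per column, instead
of one table scan per query (the rows below are made up for illustration):

```python
from collections import Counter

rows = [
    {"a": "x", "b": "p", "f": 1},
    {"a": "x", "b": "q", "f": 1},
    {"a": "y", "b": "p", "f": 2},
]

# One Counter per column of interest.
counters = {col: Counter() for col in ("a", "b", "f")}

for row in rows:                      # single scan over the data
    for col, counter in counters.items():
        counter[row[col]] += 1

top_a = counters["a"].most_common(5)  # top 5 values of column a
print(top_a)  # [('x', 2), ('y', 1)]
```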
Is it possible to:
SELECT * FROM table
but return only the non-null columns?
Since I use psycopg2 with DictCursor (a hashtable), it's better for me
not to have the column at all than to have it as NULL.
Thanks
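If the server keeps returning the NULLs, they are easy to drop client-side
with a dict comprehension over each DictCursor-style row (the row below is
a made-up stand-in for what psycopg2 would return):

```python
# Stand-in for one row fetched via psycopg2's DictCursor.
row = {"id": 1, "name": "dorian", "email": None, "phone": None}

# Keep only the columns whose value is not NULL (None in Python).
non_null = {k: v for k, v in row.items() if v is not None}
print(non_null)  # {'id': 1, 'name': 'dorian'}
```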