[SQL] Customised Encoding
Dear Friends,

Postgres 7.3.2 running on Linux 7.2.

I would like to convert any value between 1 and 2^16 into a single
character, and decode that single char to get back the numeric value
again. Is there any function to do that in Postgres? Please shed some
light.

Thanks
Kumar
[SQL] Postgres as backend to Backup System
Hello,
I don't know if this is the forum for this but here goes.
I am interested in using Postgres as the backend to a backup system.
Does anyone have any experiences or ideas on this? I want to use
Postgres to store information about files, directories, archives etc
written to tape. These are the typical types of information that I feel
need to be stored in postgres:
Tape ID
Location of tape in autoloader magazine
Directory file sizes
Total Archives on tape
Total bytes in archive
Archive location of a file or directory on a tape
Total bytes on tape
Date archive was written to tape
Server associated with an archive
Absolute path to file or directory on tape
My log files are generated by using the "v" option of the "tar" command.
These create daily log files of 6-8 MB that list every file that
is backed up. This comes out to 75,000 lines per day. If you had an
autoloader that you cycled through with 10 tapes for example, that could
contain 750,000 entries.
My system backs up anything that can run rsync. For me right now that
is Linux servers, Novell servers, Macs running OS X, and Windows servers.
Because there are many types of servers, the database should be able to
store which server and archive number a file or directory is in.
If you were to search in the database for a file or directory, it would
return a list that gave you the tape(s), date(s), archive(s) number on
tape, etc.
Commercial systems use backend SQL servers. I believe Veritas Backup
Exec uses MSSQL, Arcserve uses a backend database (don't know the type).
Any ideas would be appreciated.
This is what I have so far:
-- Database for backup system
create sequence ftid_seq start 1 increment 1;
create sequence did_seq start 1 increment 1;
create sequence archiveid_seq start 1 increment 1;
create sequence deviceid_seq start 1 increment 1;
create sequence tid_seq start 1 increment 1;
create table tapedevice(
deviceid integer not null default nextval('deviceid_seq'::text),
server varchar(20) not null,
devicename varchar(20),
numtapes integer,
drivename varchar(20),
autoloader boolean,
constraint deviceid_pk primary key(deviceid)
);
create table tapes(
tid integer not null default nextval('tid_seq'::text),
tapeid varchar(20) not null,
numwrittento integer,
currslot integer,
deviceid integer references tapedevice(deviceid)
match full
on update cascade
on delete cascade,
totalbytes int8,
numberarchives integer,
constraint tid_pk primary key(tid)
);
create table filetable(
ftid int8 not null default nextval('ftid_seq'::text),
archiveid integer not null references tapes(tid)
match full
on update cascade
on delete cascade,
absolutepath varchar(200),
constraint ftid_pk primary key(ftid)
);
create table directorytable(
did int8 not null default nextval('did_seq'::text),
archiveid integer not null references tapes(tid)
match full
on update cascade
on delete cascade,
absolutepath varchar(200),
constraint did_pk primary key(did)
);
create table archives(
archiveid integer not null default
nextval('archiveid_seq'::text),
tid integer references tapes(tid)
match full
on update cascade
on delete cascade,
totalbytes int8 not null,
totalfiles int8,
archivestartdir varchar(100),
fromserver varchar(20),
datewritten date,
constraint archiveid_pk primary key(archiveid)
);
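A typical lookup against this schema might then be a three-way join. The
sketch below is illustrative only; note that in the DDL as posted,
filetable.archiveid actually references tapes(tid), so this query assumes
it is meant to reference archives(archiveid) instead:

```sql
-- Hypothetical search: which tapes hold a given file, and when was it written?
SELECT t.tapeid, t.currslot, a.datewritten, a.fromserver, f.absolutepath
FROM filetable f
JOIN archives a ON a.archiveid = f.archiveid
JOIN tapes    t ON t.tid = a.tid
WHERE f.absolutepath LIKE '%/etc/hosts'   -- search pattern is illustrative
ORDER BY a.datewritten DESC;
```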
--
Kent L. Nasveschuk <[EMAIL PROTECTED]>
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
[SQL] Performance issue
Howdy,
I am experiencing some performance issues on a table in my postgres db.
A cron script is run every night (while db activity is very low)
that deletes all rows from the table and injects a bunch of new data...
Every day between 60,000 and 100,000 rows.
Info:
ppdb=> select version();
version
PostgreSQL 7.4.1 on i386-portbld-freebsd5.2, compiled by GCC cc (GCC)
3.3.3 [FreeBSD] 20031106
(1 row)
ppdb=> \d itxhas
                                  Table "public.itxhas"
   Column    |            Type             |                     Modifiers
-------------+-----------------------------+----------------------------------------------------
 hasid       | integer                     | not null default nextval('itxhas_hasid_seq'::text)
 partno      | character varying(60)       |
 mfg         | character varying(25)       |
 qty         | character varying(20)       |
 condition   | character varying(20)       |
 gescode     | character varying(10)       |
 cmup        | numeric(14,2)               |
 create_date | timestamp without time zone |
 posted_date | timestamp without time zone |
 status      | character varying(15)       |
 company_id  | integer                     |
 die_date    | timestamp without time zone |
Indexes:
    "itx_create_date_idx" btree (create_date)
Every time I need to select something from this table, I feel it takes
way too long..
I have run vacuum analyze without improvement.
ppdb=> explain select count(*) from itxhas;
QUERY PLAN
Aggregate (cost=78472.86..78472.86 rows=1 width=0)
-> Seq Scan on itxhas (cost=0.00..78253.09 rows=87909 width=0)
(2 rows)
ppdb=> explain analyze select count(*) from itxhas;
QUERY PLAN
--
Aggregate (cost=78472.86..78472.86 rows=1 width=0) (actual
time=24242.717..24242.719 rows=1 loops=1)
-> Seq Scan on itxhas (cost=0.00..78253.09 rows=87909 width=0)
(actual time=23763.883..24122.221 rows=87909 loops=1)
Total runtime: 24242.844 ms
(3 rows)
Is this normal? If I run the same select on another table in the same
database with ~40,000 rows, it takes approx 820 ms...
Any ideas ?
/mich
--
Best Regards,
Michael L. Hostbaek
*/ PGP-key available upon request /*
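For the record, the usual fix for this delete-everything-then-reload
pattern is the following (a sketch; whether a plain VACUUM would suffice
instead depends on the free space map settings):

```sql
-- One-off: compact the bloated table and refresh planner statistics.
-- Unlike plain VACUUM, VACUUM FULL actually shrinks the file on disk.
VACUUM FULL ANALYZE itxhas;

-- In the nightly job: TRUNCATE instead of a bulk DELETE releases the
-- table's disk space immediately, so dead rows never accumulate.
TRUNCATE TABLE itxhas;
```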
Re: [SQL] Logical comparison on Strings
kumar wrote:
> Dear Friends,
> Postgres 7.3.2 on Linux 7. I want to compare two columns and get the
> logical result as follows. C1 is 'YNYNY'. C2 is 'NNYYY'. I want to
> compare like the AND and OR operators. C1 AND C2 should give a result
> like NNYNY. C1 OR C2 should give a result like YNYYY.
> Please shed some light.
> Thanks
> Kumar
Bit String Types in PostgreSQL may be what you are looking for.
btw: what is linux 7?
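Recoding the Y/N flags as bit strings makes the AND/OR comparisons direct.
A sketch (the translate() trick assumes the text-to-bit cast available in
recent PostgreSQL versions; behaviour on 7.3 would need testing):

```sql
-- Y=1, N=0: 'YNYNY' -> B'10101', 'NNYYY' -> B'00111'
SELECT B'10101' & B'00111' AS c1_and_c2,  -- 00101, i.e. NNYNY
       B'10101' | B'00111' AS c1_or_c2;   -- 10111, i.e. YNYYY

-- Converting existing Y/N text values on the fly:
SELECT translate('YNYNY', 'YN', '10')::bit(5)
     & translate('NNYYY', 'YN', '10')::bit(5);
```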
Re: [SQL] Performance issue
On 27/04/2004 10:12 Michael L. Hostbaek wrote:
[snip]
> Is this normal ? If I run the same select on another table in the same
> database with ~40.000 rows, it takes approx 820.00ms...
You would probably get better answers on the correct list, but my guess is
that your fsm setting might be too low for the number of dead tuples you
are trying to reclaim, so the vacuum is not clearing all the dead stuff
out. There's plenty of stuff about this in the archives and the docs.
--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk
Re: [SQL] Customised Encoding
On Tuesday 27 April 2004 07:54, kumar wrote:
> Dear Friends,
>
> Postgres 7.3.2 Runing on Linux 7.2
>
> I would like to convert any values in between 1 to 2^16 into a single
> charactor. And decode that single char to get back the numeric value again.
> Any function to do that in Postgres SQL Server.
You could use unicode (e.g. UTF-8) encoding. I don't see what it gains you
though. What do you hope to achieve by storing numerical data as characters?
--
Richard Huxton
Archonet Ltd
[SQL] How to get binary value for an Integer
Dear Friends,

I am using postgres 7.3.2. I want to get the binary value of the number
65536. Is there any way to get that with postgres functions?

Thanks
kumar
Re: [SQL] Proper SQL syntax requested
On Thursday 22 April 2004 00:41, Blake wrote:
> Select * from floors Where system = 1 ORDER by make
>
> This sorts by the Serial ID's in column make
>
> I would like to be able to sort by the actual Names associated back
> from the ID's. Anyway of doing this??
You don't say which table contains the name in question. Assuming we want
column make_name from table make_description with an id of make_id, then:

SELECT floors.*
FROM floors, make_description
WHERE system = 1
  AND floors.make = make_description.make_id
ORDER BY make_description.make_name;

This is called a join. Any book on databases/SQL should discuss this sort
of stuff.
--
Richard Huxton
Archonet Ltd
Re: [SQL] How to get binary value for an Integer
kumar wrote on Apr 27, 2004:
> Dear Friends,
>
> I am using postgres 7.3.2. I wanna get the binary value of number 65536.
> Is there anyway to get that postgres functions.
SELECT 65536::bit(32);
(assuming you have in mind a 32 bit arch).
> Thanks
> kumar
--
-Achilleus
Re: [SQL] Customised Encoding
On Tuesday 27 April 2004 13:55, kumar wrote:
> Hi Richar,
Kumar - try to make sure you reply to the list.
> It didnt work for me.
>
> select encode('65536'::bytea,'UTF-8')
> ERROR: Cannot cast type integer to bytea
>
> select encode('65536'::bytea,'UTF-8')
> ERROR: No such encoding as 'UTF-8'
I think you're using the encode() function wrongly. The second parameter is
supposed to be something like "base64" or "hex". I'm not sure it makes sense
to try and cast an integer to a bytea either.
Can you say what you're trying to achieve here?
--
Richard Huxton
Archonet Ltd
Re: [SQL] Postgres as backend to Backup System
Hello,
I use mtx and mt commands to load and unload tapes from the autoloader
and the drive unit. The system has evolved and is fairly elaborate. It
is an automated cron job that rotates through the magazine, writing to
tape and sending me log files that describe what archives, files and
directories are on any particular tape.
For the Macs running OS X I run a cron job that creates tar archives.
Rsync comes with OS X. You can't use the normal tar command because it
drops the resource forks that the Mac file system uses. There is a hacked
version available on the Internet, called hfstar, that works exactly the
same way. I grab these archives off hours using rsync and send them to
tape.
A version of cron and rsync also works on Netware 5.1. I use cron to
shutdown services and restart after they have been backed up. Rsync
works the same as on Linux.
The problem that I ran into is that all of these log files are mailed to
me on a continual basis. I am familiar with how everything goes
together, but it isn't accessible to anyone else. I wanted to develop a
web based backup system with a backend database using Postgres to store
information about the location of files on tapes.
I also wanted to make the scope of this broad enough so that if you had
different types of autoloaders or stand alone tape drive at multiple
locations you could store that information in there also.
On Tue, 2004-04-27 at 10:15, Theodore Petrosky wrote:
> Kent,
>
> I am very interested in this project. I am setting up
> a job tracking system (postgresql as the backend) for
> an ad agency and I have been looking at archiving the
> work files. (All Mac OS X) How are you talking to the
> tape juke box?
>
> Postgresql would be great for your project.
>
> JMHO
>
> Ted
>
> --- "Kent L. Nasveschuk"
> <[EMAIL PROTECTED]> wrote:
> > [snip: original message quoted in full]
Re: [SQL] Postgres as backend to Backup System
Michael,
Thanks for responding.
On Tue, 2004-04-27 at 10:38, Michael Satterwhite wrote:
> On Tuesday 27 April 2004 03:39, Kent L. Nasveschuk wrote:
> > [snip: original message quoted in full]
> I'm doing this now using MySQL (I'm converting it to Postgres). A couple
> of thoughts on your structure.
> You really don't need to store total archives or total bytes as this can
> be retrieved at any time by select sum().
Good idea, I'll eliminate it.
> I have a controls table that gives me (among other things) the maximum
> number of backup sets to keep. That allows me to automatically cycle
> through the sets.
Now, do you keep any information on tapes that have been overwritten? Are
you writing to more than one tape in a backup session? What else does your
controls table store?
--
Kent L. Nasveschuk <[EMAIL PROTECTED]>
[SQL] SELECTing part of a matrix
I've got a 3D matrix stored in a table. I'd like to pull back just 2 of
the dimensions (specifying a constant value for the 3rd dimension).
e.g. table1 == matrix1[5][10][20]
I'd like to get matrix1[1][all][all] and have it pulled back as a 2D
matrix (new_matrix1[all][all]). Any way to do this?
-Tony
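If the matrix is stored in a PostgreSQL array column, slice subscripts
come close. A sketch (the column name m is hypothetical; note the result
still has three dimensions, the first just has length 1 - PostgreSQL will
not collapse a dimension for you):

```sql
-- m is a 5 x 10 x 20 integer array column; take the first "plane"
SELECT m[1:1][1:10][1:20] FROM table1;
```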
[SQL] date arithmetic over calender year boundaries
Hi,
The following Postgres 7.1 query extracts aggregated data for an arbitrary
period within each year for sites in a table containing 30 years of
temperature data.

topo=> \d longterm
       Table "longterm"
 Attribute |     Type     | Modifier
-----------+--------------+----------
 site      | character(5) | not null
 obs       | date         | not null
 lo        | numeric(3,1) |
 hi        | numeric(3,1) |

topo=> select site, extract(year from obs) as year, sum((hi+lo)/2-4) as gdd4
topo=> from temperature
topo=> where extract(doy from obs) >= 1
topo=> and extract(doy from obs) <= 5
topo=> group by site, extract(year from obs);
 site  | year | gdd4
-------+------+------
 01001 | 1973 | 51.7
 01001 | 1974 | 39.5
 01001 | 1975 | 67.9
 ...

My question is, how can this type of query be constructed to do the same
sort of thing for a period that straddles the calendar year boundary?
--
Regards,
Ray Jackson                   email: [EMAIL PROTECTED]
Computing Coordinator         phone: +64-3-479-8768
Dept. Geography/Te Ihowhenua  fax: +64-3-479-8706
Otago University              postal: Box 56, Dunedin
Te Whare Wananga o Otago      AOTEAROA/NEW ZEALAND
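One common approach is to shift obs by a fixed interval so that the whole
window falls inside one nominal year, then group on the shifted year. A
sketch for a 15 Dec - 15 Jan window (the day-of-year 349 cutoff assumes a
non-leap year, and this would need testing on 7.1):

```sql
SELECT site,
       extract(year FROM obs - interval '31 days') AS season,
       sum((hi + lo) / 2 - 4) AS gdd4
FROM temperature
WHERE extract(doy FROM obs) >= 349  -- from 15 Dec ...
   OR extract(doy FROM obs) <= 15   -- ... through 15 Jan
GROUP BY site, extract(year FROM obs - interval '31 days');
-- Dec 1973 and Jan 1974 both shift into 1973, so they aggregate together.
```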
Re: [SQL] Which SQL command creates ExclusiveLock?
"Denis Khabas" <[EMAIL PROTECTED]> writes:
> According to postgres documentation, all update operations place ROW
> EXCLUSIVE MODE locks, and EXCLUSIVE MODE is not automatically acquired
> by any postgres SQL command. So, which command places ExclusiveLock?
It says that no SQL command acquires ExclusiveLock *on a table*. The
pg_locks row you show represents ExclusiveLock on a transaction number.
Every transaction gets ExclusiveLock on its transaction number for the
duration of its existence.
regards, tom lane
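The distinction is visible in pg_locks itself (a sketch; the exact column
set varies between PostgreSQL versions):

```sql
-- Table-level locks carry a relation OID; the per-transaction
-- ExclusiveLock rows carry a transaction id and a NULL relation.
SELECT relation, transaction, mode, granted
FROM pg_locks
WHERE mode = 'ExclusiveLock';
```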
Re: [SQL] Customised Encoding
The idea is I want to store any number from 1 to 2^16 in 2 characters only -
this is the requirement.
Since in Unicode a 16 bit value is represented as a single character,
I want to convert any number into 32 bits, then convert each 16 bits
into one char and store it in the database.
So I want to know how to convert any 16 bit value to a single character.
Is there any function to do that in Postgres?
Thanks
Kumar
- Original Message -
From: "Richard Huxton" <[EMAIL PROTECTED]>
To: "kumar" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, April 27, 2004 9:16 AM
Subject: Re: [SQL] Customised Encoding
> On Tuesday 27 April 2004 13:55, kumar wrote:
> > Hi Richar,
>
> Kumar - try to make sure you reply to the list.
>
> > It didnt work for me.
> >
> > select encode('65536'::bytea,'UTF-8')
> > ERROR: Cannot cast type integer to bytea
> >
> > select encode('65536'::bytea,'UTF-8')
> > ERROR: No such encoding as 'UTF-8'
>
> I think you're using the encode() function wrongly. The second parameter is
> supposed to be something like "base64" or "hex". I'm not sure it makes sense
> to try and cast an integer to a bytea either.
>
> Can you say what you're trying to achieve here?
>
> --
> Richard Huxton
> Archonet Ltd
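The closest built-ins to what Kumar describes are chr() and ascii(), which
map between a character and its numeric code. A sketch (for values up to
2^16 this relies on a multibyte database encoding such as Unicode, and the
behaviour on 7.3 would need testing):

```sql
SELECT chr(66);     -- code point to character
SELECT ascii('B');  -- character back to code point
```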
[SQL] query optimizer dont treat correctly OR
Hello folks,
See the command below. I have used something similar for about a decade on
Oracle, Sybase, MSSQL, DB2, etc. But with Postgresql it generates a FULL
TABLE SCAN, and consequently it takes about 10 minutes to run (very big
table..)

SELECT A.SR_RECNO, A.CDEMP, A.NRPED, A.SR_RECNO
FROM FTB01 A
WHERE ( A.CONTROLE <= ' ' AND A.CDEMP = '75' AND A.NRPED < '261' )
   OR ( A.CONTROLE = ' ' AND A.CDEMP < '75' )
   OR ( A.CONTROLE < ' ' )
ORDER BY A.CDEMP DESC, A.NRPED DESC, A.SR_RECNO DESC
LIMIT 170

Otherwise, if I write the query in a little more "dummy" form, eliminating
the "OR"s and replacing them with UNION, the execution time drops to less
than two seconds:

SELECT TMP1.* FROM (
  SELECT A.SR_RECNO, A.CDEMP, A.NRPED, A.SR_RECNO
  FROM FTB01 A
  WHERE ( A.CONTROLE <= ' ' AND A.CDEMP = '75' AND A.NRPED < '261' )
  ORDER BY A.CDEMP DESC, A.NRPED DESC, A.SR_RECNO DESC
  LIMIT 170 ) TMP1
UNION
SELECT TMP2.* FROM (
  SELECT A.SR_RECNO, A.CDEMP, A.NRPED, A.SR_RECNO
  FROM FTB01 A
  WHERE ( A.CONTROLE = ' ' AND A.CDEMP < '75' )
  ORDER BY A.CDEMP DESC, A.NRPED DESC, A.SR_RECNO DESC
  LIMIT 170 ) TMP2
UNION
SELECT TMP3.* FROM (
  SELECT A.SR_RECNO, A.CDEMP, A.NRPED, A.SR_RECNO
  FROM FTB01 A
  WHERE ( A.CONTROLE < ' ' )
  ORDER BY A.CDEMP DESC, A.NRPED DESC, A.SR_RECNO DESC
  LIMIT 170 ) TMP3
ORDER BY CDEMP DESC, NRPED DESC, SR_RECNO DESC
LIMIT 170

The command above works (even being 10x slower than on other databases)
without generating the full scan. Why does Postgres make this wrong
judgment with the initial command? Is there something I can configure to
make postgres handle it correctly?
Obs.:
* Tested on versions 7.3.2 and 7.4.1
* Obviously VACUUM FULL ANALYZE was executed
Thanks
Luiz
Re: [SQL] query optimizer dont treat correctly OR
On Tuesday 27 April 2004 15:48, Luiz Rafael Culik Guimaraes wrote:
> See the command bellow. I use some thing simmilar about an decade on
> Oracle, Sybase, MSSQL, DB2, etc. But with Postgresql, he generate an
> FULL TABLE SCAN, and consequenyly it take about 10 minutes to run
> (Very big table..)
> Why Post do this wrong julgment with the initial command?
> Exist some thing that i can configure to to make postgres works correctly ?
You'll need to post the output of EXPLAIN ANALYSE for your query. This will
show how many rows PG thinks it will get back compared to how many it
actually gets back.
--
Richard Huxton
Archonet Ltd
