QT6 Package Requests

2024-01-22 Thread Peter Willis

Hello,

Some parts of Qt6 appear to be missing from the package under 22.04.

Notably the following:

QTextToSpeech
QBluetooth
QGeoRoute
Qt6::Location
Qt6::Position


Thanks,


P Wills

-- 
Ubuntu-motu mailing list
Ubuntu-motu@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu


[bug #55093] Add LUKS2 support

2022-01-16 Thread Peter Willis
Follow-up Comment #10, bug #55093 (project grub):

It seems that LUKS2 support has been implemented, but there also seem to be
bugs in the implementation
(https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=945404). My suggestion is
to close this bug and open a new one to address the new bugs.

___

Reply to this item at:

  

___
  Message sent via Savannah
  https://savannah.gnu.org/




Patch to add support for RFC2324 (HTCPCP)

2021-11-28 Thread Peter Willis
Hi busybox devs, It's been a long time! About 17 years since my last submission 
:-)

I was just trying to make some coffee with busybox, and I noticed it doesn't 
support RFC 2324 (Hyper Text Coffee Pot Control Protocol). Attached is a patch 
that adds support for the standard, although I should mention it's not full 
support: I take my coffee black, so I didn't implement WHEN and 
Accept-Additions, but I'm sure someone else can if they need creamer 
(although some Kahlua wouldn't go amiss with this winter weather...).

The patch includes a configuration file option "T" that sets whether the host 
is a teapot. The default is teapot mode, for portability (coffee-brewing 
operations shouldn't happen on a teapot).

Sample operation:

$ echo "T:1" > cgi-bin/httpd.conf 
$ curl -d 'start' -H "Content-Type: application/coffee-pot-command" -X BREW 
http://localhost:6789/cgi-bin/coffeepot
418 I'm a teapot
418 I'm a teapot
The web server is a teapot

$ echo "T:0" > cgi-bin/httpd.conf
$ curl -d 'start' -H "Content-Type: application/coffee-pot-command" -X BREW 
http://localhost:6789/cgi-bin/coffeepot
Brewing coffee!

Also note that the patch fixes a Content-Length bug I found in send_headers():

The function always returns a Content-Length, and it is always set to the 
length of a file (for example, on a file request the file's size is used, even 
though an error might still be thrown after that point). If infoString is then 
set (the text of a response code), the length of the resulting HTML output 
bears no relation to the file size already advertised as the Content-Length. 
The Content-Length therefore needs to be set to either the file size or the 
length of the infoString HTML message. The patch adds a calculation of the 
rendered infoString template's size and returns that length whenever 
infoString is set.

HTCPCP.diff.gz
Description: application/gzip
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox


Who Uses Scientific Linux, and How/Why?

2020-02-24 Thread Peter Willis
Hello,
 
The variation in uses of Scientific Linux is quite interesting.
As mentioned before, we are using it for fluid dynamics modelling and 
oceanography, in the context of parallel computing with OpenMP and MPICH.
 
I am curious to see what everyone else has been using it for.
 
Perhaps, if it’s not too much trouble, people on the list might give a short 
blurb about how they use it and why.
Maybe also mention others they know who use it but are not on this list.
 
Peter
 
 
 
>I'm no scientist, just an electronics guy who does a lot of research in RF (as 
>a hobby, mostly testing antennas for ham radio in the VHF bands) from Argentina.
> 
>For me SL is the most "well done" Linux distribution, for people who simply know.
> 
>I will be looking to move to another distribution.
> 
> 
>>I'm an independent electronics inventor, heavily dependent
>>on both competent software and competent laboratory science,
>>both for the knowledge I depend on and the tools I use to
>>transform that knowledge into products and services for
>>my customers.  
>>


RE: Is Scientific Linux Still Active as a Distribution?

2020-02-21 Thread Peter Willis
Hello,
 
I can't say I'm negative toward CentOS; I used it back in the late 90s (or maybe 
early 00s) as an alternative to RedHat at that time.
It's more a familiarity thing. I have used more Debian-based Linux distros 
since the mid 1990s than anything else.
I will certainly look into CentOS as an option. It could be a shorter path to 
completion.
Thanks for reminding me of that alternative.
 
My friend just toured Fermi Lab and brought me back a lapel pin. I was thankful 
but very envious of her visit there.
It's certainly on my list. 
 
I could probably just retire and, quite happily, tour all the world's particle 
accelerator facilities.
Make a nice scrapbook of blueprints.
 
Peter
 
 
 
>Hi,
>I'm surprised by the so negative feeling against CentOS which is a great 
>project too and has been working well since it was "acquired" by Red Hat. I 
>see no official sign that it should change. Moving from SL to CentOS is 
>straightforward, I don't think you can speak about it as a migration as it is 
>exactly the same product. And staying with CentOS will give you a chance to 
>meet the DUNE people at some point and more generally the HEP community if you 
>liked interacting with it!
>Cheers,


RE: Is Scientific Linux Still Active as a Distribution?

2020-02-21 Thread Peter Willis
Hello,

Thanks to everyone for clarifying the future status of SL.
I guess it's time to start researching the docs for Ubuntu/Debian or something.
 
Looks like we need to revise our computing cluster plan.
The cluster here is pretty small, with only two nodes and a controller 
totalling 112 CPUs.
We use it for numerical modelling of ocean and river currents and sediment 
transport (OpenMP/MPICH/FORTRAN).
The changeover will be pretty small. We are still waiting for the OK for a new 
node or two.
The current nodes are ten years old. The update to a controller and SL7 was a 
last-ditch effort to join the two nodes and increase the scale of the models 
without costing too much more.
 
In other news, the link you shared has an article about 'DUNE' which seems like 
an interesting project.
I'd certainly frostbite a few toes to just stand around and watch that thing 
run experiments.
 
Thanks for the info,
 
Peter
 
 
>Hello Peter,
> 
>> Is Scientific Linux still active?
>Scientific Linux 6 and 7 will be supported until they are EOL, but there will 
>be no SL8.
> 
>Here is the official announcement from last April:
> 
>https://listserv.fnal.gov/scripts/wa.exe?A2=ind1904 
>
> =SCIENTIFIC-LINUX-USERS=817
> 
>Bonnie King


Problematic 'yum' repository paths using --installroot option

2020-01-28 Thread Peter Willis
Hello,

I am attempting to build an exported file system for diskless nodes.

My distribution is SL 7.7 and the command I am using is:

yum install @Base kernel dracut-network nfs-utils --installroot=/node_root 
--releasever=/

The repository problem manifests as a mistranslation of the '$slreleasever' 
repository variable.
 
This makes all the FTP repository URLs used by yum malformed: the literal value 
'$slreleasever' appears in the URLs rather than the proper value '7x'. Hence 
the URLs read:

http://ftp.scientificlinux.org/linux/scientific/$slreleasever/x86_64/os/Packages/

Rather than:

http://ftp.scientificlinux.org/linux/scientific/7x/x86_64/os/Packages/


Has anyone encountered this issue with yum before?

Is there a solution?

I have tried 'yum clean all' and friends.
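One workaround that may be worth trying (untested here, and assuming yum 
resolves '$slreleasever' from a vars file under the install root rather than 
from the host system) is to define the variable inside the new root before 
installing:

mkdir -p /node_root/etc/yum/vars
echo 7x > /node_root/etc/yum/vars/slreleasever
yum install @Base kernel dracut-network nfs-utils --installroot=/node_root --releasever=/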

Thanks for any help,

pwillis  


[gdal-dev] What Statistics Are Returned By GDAL.Dataset.GetRasterBand(...).GetStatistics(...) ?

2019-10-03 Thread Peter Willis
Hello,

I am using the osgeo.gdal Python module to load and get stats from image
bands.

While working with the Python 'GetStatistics(...)' function I noticed that
the returned values seemed a bit off.
A snippet of code is shown below.

I compared the statistics returned in Python by the function to the band
statistics reported by the ENVI image processing package for the same input file.
The results are shown below.

I am wondering if I am missing information regarding some key data-filtering
feature in GDAL's GetStatistics.
What statistics is GDAL returning?


#
## PYTHON CODE
#

from osgeo import gdal

src_ds = gdal.Open(image_path)

for bnd in range(1, src_ds.RasterCount + 1):
    item = {}
    print("[ GETTING BAND ]: ", bnd)
    srcband = src_ds.GetRasterBand(bnd)
    if srcband is None:
        continue

    bnddata = srcband.ReadAsArray()

    # approx_ok=0, force=1: ask GDAL for exact statistics, computing them if needed
    stats = srcband.GetStatistics(0, 1)
    if stats is None:
        continue
    else:
        print('put the values in a dictionary')
        ## GDAL stats ##
        #item['min'] = stats[0]
        #item['max'] = stats[1]
        #item['mean'] = stats[2]
        #item['stdev'] = stats[3]

        ## numpy array stats ##
        #item['min'] = bnddata.min()
        #item['max'] = bnddata.max()
        #item['mean'] = bnddata.mean()
        #item['stdev'] = bnddata.std()

###
# END PYTHON CODE
###


/*..

..*/

ENVI Statistics test returns these values for each channel:

Channel Min   Max       Mean      Stdev
Band 1  0.00  0.441641  0.135938  0.095007
Band 2  0.00  0.42  0.134556  0.096385
Band 3  0.00  0.512614  0.143145  0.108702
Band 4  0.00  0.574203  0.159381  0.128212
Band 5  0.00  1.286870  0.190917  0.159562
Band 6  0.00  1.368695  0.218191  0.191100
Band 7  0.00  1.208142  0.179098  0.158407

/*..

..*/

GDAL GetStatistics in Python 3.0 Returns

[
{'min': 0.10543035715818, 'max': 0.35029646754265, 'mean': 0.19733288299107,
'stdev': 0.03141020073449}, 
{'min': 0.087364979088306, 'max': 0.36481207609177, 'mean':
0.19531263034662, 'stdev': 0.040193729911813}, 
{'min': 0.066563792526722, 'max': 0.40562310814857, 'mean':
0.20773722116063, 'stdev': 0.060965329408999}, 
{'min': 0.057971999049187, 'max': 0.49459338188171, 'mean':
0.23126556845189, 'stdev': 0.084854378172706}, 
{'min': 0.052771702408791, 'max': 0.57949388027191, 'mean':
0.27695694029464, 'stdev': 0.11419731959916}, 
{'min': 0.033055797219276, 'max': 0.67045384645462, 'mean':
0.31645853188616, 'stdev': 0.14756091787453}
]


/*..

..*/

Using NumPy Stats Functions in Python 3.0
np.min()
np.max()
np.mean()
np.std()

These return the following values after converting the GDAL bands to
NumPy arrays (the values look similar to ENVI):
[
{'min': 0.0, 'max': 0.44164082, 'mean': 0.13593808, 'stdev': 0.09500719}, 
{'min': 0.0, 'max': 0.4158, 'mean': 0.13455594, 'stdev': 0.09638489}, 
{'min': 0.0, 'max': 0.51261353, 'mean': 0.14314489, 'stdev': 0.108702265}, 
{'min': 0.0, 'max': 0.57420313, 'mean': 0.15938139, 'stdev': 0.12821245}, 
{'min': 0.0, 'max': 1.2868699, 'mean': 0.19091722, 'stdev': 0.15956147}, 
{'min': 0.0, 'max': 1.3686954, 'mean': 0.21819061, 'stdev': 0.1911003}
]
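One thing that may be worth checking (a guess only): GetStatistics() will 
return statistics cached in a sidecar '.aux.xml' file if one is present, so a 
stale sidecar left over from an earlier run can give numbers that match 
neither ENVI nor NumPy. Removing it and forcing a recompute is a quick test 
(file names here are placeholders):

rm -f image.tif.aux.xml      # drop any cached-statistics sidecar next to the image
gdalinfo -stats image.tif    # recompute and print per-band statistics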



___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/gdal-dev

unsubscribe

2019-08-06 Thread Peter Willis
unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: [MIT-Scheme-users] Installing 10.1.2 on Ubuntu 18.10

2018-11-03 Thread Peter Willis
Your system may already have Scheme installed. Try

 sudo apt-get install scheme

instead of building, and see if it's already installed. You may be creating a 
Scheme version or GCC compiler version conflict during the build with some 
library that is already on your system.
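A quick way to see what is already on the system before building (the package 
and binary names here are the usual Ubuntu ones, so treat them as a guess):

dpkg -l | grep -i scheme    # list any installed Scheme packages
mit-scheme --version        # print the version of an installed mit-scheme, if any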

I’m only guessing since my kubuntu machine is offline right now.

P




> On Nov 3, 2018, at 2:30 AM, Sven Hartrumpf  wrote:
> 
> Hi.
> 
> I am trying to install MIT Scheme 10.1.2 from mit-scheme-10.1.2.tar.gz.
> (The pre-installed mit-scheme binary detected by configure is:
> Release 9.1.1 || Microcode 15.3 || Runtime 15.7 || SF 4.41 || LIAR/x86-64 
> 4.118 || Edwin 3.116 )
> The make step fails as follows:
> 
> ;  Generating SCode for file: "syntax-parser.scm" => "syntax-parser.bin"...
> ;Premature reference to reserved name: spar
> ;To continue, call RESTART with an option number:
> ; (RESTART 2) => Skip processing file 
> /var/tmp/mit-scheme-10.1.2/src/runtime/syntax-parser.scm
> ; (RESTART 1) => Return to read-eval-print level 1.
> 
> 2 error>
> End of input stream reached.make: *** [Makefile:190: compile-runtime] Error 1
> 
> Greetings
> Sven
> 


___
MIT-Scheme-users mailing list
MIT-Scheme-users@gnu.org
https://lists.gnu.org/mailman/listinfo/mit-scheme-users


Re: [MIT-Scheme-users] Help with Windows?

2018-06-03 Thread Peter Willis
64 bit Scheme with GPU would be interesting.

Peter 



> On Jun 1, 2018, at 11:17 PM, Chris Hanson  wrote:
> 
> I'm in the process of putting together a new release of MIT/GNU Scheme.
> 
> Unfortunately, while most of our contributors have moved to 64-bit hardware, 
> the Windows port is still 32-bit. And it hasn't worked very well in a long 
> time, mostly because of memory addressing issues in the 32-bit space.
> 
> I don't have the energy nor the interest to update the Windows port. I never 
> use Windows and as far as I know none of the other contributors does either.
> 
> So I'm looking for a volunteer who would like to do this work.
> 
> If no one steps up, then the next release will not be ported to Windows. Even 
> if we do get a volunteer, it still might not be in the release, since I'd 
> like to get the release out in the near future and I think the work to update 
> the Windows port will take a while.
> 
> So please, if you care about using MIT/GNU Scheme on Windows and are willing 
> to help, I'd really appreciate hearing from you.
> 
> Thanks,
> Chris


___
MIT-Scheme-users mailing list
MIT-Scheme-users@gnu.org
https://lists.gnu.org/mailman/listinfo/mit-scheme-users


RE: Problem with external file

2017-06-15 Thread Peter Willis

>That would be Debian Sarge. Debian backports has svn 1.4 ... too old, I'm 
>afraid.
>
>> I could try to build SVN using source, but I have my doubts.
>
>It's not inconceivable that you could build 1.6, but getting all the 
>dependencies lined up could be a pain on Sarge, indeed.
>
> I'll just keep moving the file manually.
>
>Or you can just make a copy of the file in the repository instead of a file 
>external? Or a normal "directory" external and a symlink to the right place?
>
>-- Brane

Unfortunately, I am working between MS OS and Linux. The soft link would only 
work on one of those. MS uses a '.lnk' file for links and Linux uses an actual 
filesystem symlink.

Stuck with the file, I guess.

Thanks again.

Peter



RE: Problem with external file

2017-06-15 Thread Peter Willis

>Really? Version 1.1.4? Because support for file externals was added in version 
>1.6.0, so I'm not really surprised your checkout fails. :)
>
>Perhaps you could consider upgrading your client?
>
>-- Brane

O.K., thanks, that makes sense: an out-of-date package of SVN.
The failing client is on an embedded device with Ubuntu 'Sarge' as the OS, which 
is from the 'way-back machine'.

I could try to build SVN using source, but I have my doubts.
I'll just keep moving the file manually.
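For the record, two generic checks that make the mismatch visible (nothing here 
is specific to this repository):

svn --version | head -1       # confirm the client on the embedded device predates 1.6
svn propget svn:externals .   # run in the working copy that does work, to see how the external is defined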

Thanks again for your help.

Peter






RE: Problem with external file

2017-06-15 Thread Peter Willis
>Can you show the output of 'svn --version' on that computer where it doesn't 
>work? It looks like your Subversion was compiled without support for the 
>svn:// protocol (which would be really strange).
>
>-- Brane

Yes, the output of that is:

svn --version
svn, version 1.1.4 (r13838)
   compiled May 13 2005, 06:44:42

Copyright (C) 2000-2004 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).

The following repository access (RA) modules are available:

* ra_dav : Module for accessing a repository via WebDAV (DeltaV) protocol.
  - handles 'http' schema
  - handles 'https' schema
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' schema
* ra_svn : Module for accessing a repository using the svn network protocol.
  - handles 'svn' schema





Problem with external file

2017-06-15 Thread Peter Willis
Hello,

 

I am having an issue with an external (pointing to a file) included in one
of my projects.

 

I have a single external in my repository that points to a file.

 

On my windows desktop I can check out the directory containing the external
just fine.

The code checks out and the external also checks out.

 

The same repository path checked out using command line on another computer
gives the following:

 

...BEGIN EXAMPLE

svn checkout --username myname --password mypass
"svn://svn.myoffice.net/repos/instrument_model/src/trunk/driver" .

A  README.txt
A  install_driver.sh
U .

Fetching external item into
'svn://svn.myoffice.net/repos/driver/trunk/driver2015.ko'

svn: Unrecognized URL scheme 'driver2015.ko'

END EXAMPLE

 

 

I have removed the external and redefined it to no avail.

SVN is not complaining about the revision control path (ie:
'svn://svn.myoffice.net/repos/driver/trunk/aslacq2015.ko').
In fact, I can check out 'svn://svn.myoffice.net/repos/driver/trunk' into a
different working directory (on the problem machine) and I get that file.

SVN is actually complaining about the *FILE* path of the external in the
working directory as though it is a problem URL. (?!!)

 

Is there any reason why it should work on one computer and not another?

 

 

Thanks

 

 

 

 



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-30 Thread Peter Willis
Colleagues,

An interesting discussion; the only question appears to be whether vCPE is
a suitable use case, as the others do appear to be cloud use cases. Lots of
people assume CPE == small residential devices; however, CPE covers a broad
spectrum of appliances. Some of our customers' premises are data centres,
some are HQs, some are campuses, some are branches. For residential CPE we
use the Broadband Forum's CPE Wide Area Network management protocol
(TR-069), which may be easier to modify to handle virtual
machines/containers etc. than it would be to get OpenStack to scale to millions
of nodes. However, that still leaves us with the need to manage a stack of
servers in thousands of telephone exchanges, central offices or even
cell sites, running multiple workloads in a distributed, fault-tolerant
manner.

Best Regards,
Peter.

On Tue, Aug 30, 2016 at 4:48 AM, joehuang  wrote:

> Hello, Jay,
>
> > The Telco vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases
>
> Do you mean Mobile Edge Computing for Mobile "Edge cloud"? If so, it's
> cloud. The introduction slides [1]  can help you to learn the use cases
> quickly, there are lots of material in ETSI website[2].
>
> [1] http://www.etsi.org/images/files/technologies/MEC_
> Introduction_slides__SDN_World_Congress_15-10-14.pdf
> [2] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-
> computing
>
> And when we talk about massively distributed cloud, vCPE is only one of
> the scenarios( now in argue - ing ), but we can't forget that there are
> other scenarios like  vCDN, vEPC, vIMS, MEC, IoT etc. Architecture level
> discussion is still necessary to see if current design and new proposals
> can fulfill the demands. If there are lots of proposals, it's good to
> compare the pros. and cons, and which scenarios the proposal work, which
> scenario the proposal can't work very well.
>
> ( Hope this reply in the thread :) )
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: 29 August 2016 18:48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination
> between actions/WGs
>
> On 08/27/2016 11:16 AM, HU, BIN wrote:
> > The challenge in OpenStack is how to enable the innovation built on top
> of OpenStack.
>
> No, that's not the challenge for OpenStack.
>
> That's like saying the challenge for gasoline is how to enable the
> innovation of a jet engine.
>
> > So telco use cases is not only the innovation built on top of OpenStack.
> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
> Cloud, Mobile Edge Cloud, brings the needed requirement for innovation in
> OpenStack itself. If OpenStack don't address those basic requirements,
>
> That's the thing, Bin, those are *not* "basic" requirements. The Telco
> vCPE and Mobile "Edge cloud" (hint: not a cloud) use cases are asking
> for fundamental architectural and design changes to the foundational
> components of OpenStack. Instead of Nova being designed to manage a
> bunch of hardware in a relatively close location (i.e. a datacenter or
> multiple datacenters), vCPE is asking for Nova to transform itself into
> a micro-agent that can be run on an Apple Watch and do things in
> resource-constrained environments that it was never built to do.
>
> And, honestly, I have no idea what Gluon is trying to do. Ian sent me
> some information a while ago on it. I read it. I still have no idea what
> Gluon is trying to accomplish other than essentially bypassing Neutron
> entirely. That's not "innovation". That's subterfuge.
>
> > the innovation will never happen on top of OpenStack.
>
> Sure it will. AT&T and BT and other Telcos just need to write their own
> software that runs their proprietary vCPE software distribution
> mechanism, that's all. The OpenStack community shouldn't be relied upon
> to create software that isn't applicable to general cloud computing and
> cloud management platforms.
>
> > An example is - self-driving car is built on top of many technologies,
> such as sensor/camera, AI, maps, middleware etc. All innovations in each
> technology (sensor/camera, AI, map, etc.) bring together the innovation of
> self-driving car.
>
> Yes, indeed, but the people who created the self-driving car software
> didn't ask the people who created the cameras to write the software for
> them that does the self-driving.
>
> > WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built
> on top of OpenStack.
>
> You are defining "innovation" in an odd way, IMHO. "Innovation" for the
> vCPE use case sounds a whole lot like "rearchitect your entire software
> stack so that we don't have to write much code that runs on set-top boxes."
>
> Just being honest,
> -jay
>
> > Thanks
> > Bin
> > -Original Message-
> > From: Edward Leafe [mailto:e...@leafe.com]
> > Sent: Saturday, August 27, 2016 10:49 AM
> > To: OpenStack 

[openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-24 Thread Peter Willis
Colleagues,

I'd like to confirm that scalability and multi-site operations are key to
BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we will
have compute highly distributed around the network (from thousands to
millions of sites). BT would therefore support a Massively Distributed WG
and/or work on scalability and multi-site operations in the Architecture WG.

Best Regards,
Peter Willis.
BT Research
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[chandler-users] Synching CalDav From Chandler to Web

2015-02-27 Thread Peter Willis
Hello,

I have set up Chandler on Windows 7 64-bit to test as a CalDav client with
OwnCloud Calendar.

I have 'subscribed' to the calendar and get all the entries from the web
imported and synched into Chandler just fine.

Problems arise when I attempt to add an entry to the Chandler calendar
locally. Chandler is not syncing outbound to the web calendar.

Is there an additional plugin that should be applied/enabled to make
outbound sync of calendars possible?

Thanks 

___
chandler-users@osafoundation.org mailing list
unsubscribe here: http://lists.osafoundation.org/mailman/listinfo/chandler-users
Chandler wiki: http://chandlerproject.org/wikihome


Re: [gdal-dev] How Can I gdalwarp From One Image to a Larger Spatial Area While Leaving Missing Data Blank in the Destination?

2014-02-05 Thread Peter Willis
Hello,

Thank you Jukka, that worked.
I just shifted the coordinate of the origin to another location and
everything worked.

Peter

-Original Message-
From: gdal-dev-boun...@lists.osgeo.org
[mailto:gdal-dev-boun...@lists.osgeo.org] On Behalf Of Jukka Rahkonen
Sent: February-05-14 5:03 AM
To: gdal-dev@lists.osgeo.org
Subject: Re: [gdal-dev] How Can I gdalwarp From One Image to a Larger
Spatial Area While Leaving Missing Data Blank in the Destination?

Peter Willis pwillis at aslenv.com writes:

 
 Hello,
 
 Sorry about the title but it's a bit of a bear finding answers if the 
 headers don't show the actual topic.
 But I digress.
 
 My Issue:
 I have an input image that covers coordinates 0,0 to 450,350 UTM zone 
 13 North and the pixel size is 1 meter.
 
 I want to mosaic pixels 0,0 to 25,25 into a coverage of UTM 
 coordinates
 -25,-25 : 25, 25 at the output in 1 meter pixels.
 This should show a 25x25 square portion of the image in one corner of 
 the
 50x50 output image.
 
 'gdal_translate' will not do this because  -25,-25  are outside the 
 bounds of the original image.
 
 I have tried the following with gdalwarp:
 
 gdalwarp -overwrite -te -25 -25 25 25 -tr 1 1  ./input.tif 
 ./output.tif
 
 This does not work. I get the following error:
 
 ERROR 1: Unable to compute a transformation between pixel/line and 
 georeferenced coordinates for ./input.tif.
 There is no affine transformation and no GCPs.
 
 There is geography applied to the input file.
 
 What am I doing wrong ?

Hi,

This is funny. GDAL seems not to believe in images with the origin at 0,0.
I tested with a dummy png file:

gdal_translate -of gtiff -co tfw=yes -co profile=baseline test.png test.tif
Input file size is 588, 321
0...10...20...30...40...50...60...70...80...90...100 - done.

It does not create a tfw file. Now this fails:

gdalwarp test.tif test2.tif
ERROR 1: Unable to compute a transformation between pixel/line and
georeferenced coordinates for test.tif.
There is no affine transformation and no GCPs.

I made tfw file test.tfw by hand as
1
0
0
-1
0
0

and then this is successful:

gdalwarp test.tif test2.tif
Creating output file that is 588P x 321L.
Processing input file test.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

and also what you want to do:

gdalwarp -te -1000 -1000 1000 1000 test.tif testi3.tif
Creating output file that is 2000P x 2000L.
Processing input file test.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

So I feel there is a bug if the corner coordinates are 0,0. Please verify with a
handwritten tfw file and images with the origin at, let's say, 1,1.

-Jukka Rahkonen-



 Thanks for any help.
 
 Peter
 





___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


[gdal-dev] How Can I gdalwarp From One Image to a Larger Spatial Area While Leaving Missing Data Blank in the Destination?

2014-02-04 Thread Peter Willis
Hello,

Sorry about the title but it's a bit of a bear finding answers if the
headers don't show the actual topic.
But I digress.

My Issue:
I have an input image that covers coordinates 0,0 to 450,350 UTM zone 13
North and the pixel size is 1 meter.

I want to mosaic pixels 0,0 to 25,25 into a coverage of UTM coordinates
-25,-25 : 25, 25 at the output in 1 meter pixels.
This should show a 25x25 square portion of the image in one corner of the
50x50 output image.

'gdal_translate' will not do this because  -25,-25  are outside the bounds
of the original image.

I have tried the following with gdalwarp:

gdalwarp -overwrite -te -25 -25 25 25 -tr 1 1  ./input.tif ./output.tif

This does not work. I get the following error:

ERROR 1: Unable to compute a transformation between pixel/line
and georeferenced coordinates for ./input.tif.
There is no affine transformation and no GCPs.

There is geography applied to the input file.

What am I doing wrong ?

Thanks for any help.


Peter


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gridengine users] How to fix qmon error -- There are not enough colors. Try qmon -cmap ??

2013-11-25 Thread Peter Willis
Looks like I have LessTif.
I will get the openMotif source and try a recompile.

P 


-Original Message-
From: Reuti [mailto:re...@staff.uni-marburg.de] 
Sent: November-25-13 6:58 AM
To: Peter Willis
Cc: users@gridengine.org
Subject: Re: [gridengine users] How to fix qmon error -- There are not
enough colors. Try qmon -cmap ??

Hi,

Am 20.11.2013 um 23:41 schrieb Peter Willis:

 I am running GE2011-11p1   on Ubuntu . 
 I am getting warnings about some colors  when I try to run qmon.
 
 These messages appear to originate from:
 
 trunk/source/3rdparty/qmon/Xmt310/Xmt/PixelCvt.c
 
 The qmon program also warns about not being able to load pixmap '21cal'.
 
 A message then appears , There are not enough colors.  Try qmon -cmap .

You compiled it on your own? Did you use OpenMotif or LessTif? IIRC there
was an issue with the latter.

-- Reuti


 
 Running  'qmon -cmap'  generates the same results.
 
 What is missing from my distribution?
 
 Thanks,
 
 P
 
 
 
 
 ___
 users mailing list
 users@gridengine.org
 https://gridengine.org/mailman/listinfo/users


___
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users


[gridengine users] How to fix qmon error -- There are not enough colors. Try qmon -cmap ??

2013-11-20 Thread Peter Willis
Hello,

I am running GE2011-11p1 on Ubuntu.
I am getting warnings about some colors when I try to run qmon.

These messages appear to originate from:

trunk/source/3rdparty/qmon/Xmt310/Xmt/PixelCvt.c

The qmon program also warns about not being able to load the pixmap '21cal'.

A message then appears: "There are not enough colors. Try qmon -cmap".


Running  'qmon -cmap'  generates the same results.

What is missing from my distribution?

Thanks,

P




___
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users


[CMake] How Do I Make A Static Library from FORTRAN and CPP sources?

2013-03-01 Thread Peter Willis

Hello,

I would like to make a static library from FORTRAN sources (as opposed to
C/C++).

I have this in my CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)

# enable C++ and Fortran together so CMake sets up compilers for both
# kinds of sources in the one target
PROJECT(MYFORTRANLIB CXX Fortran)

get_filename_component (Fortran_COMPILER_NAME ${CMAKE_Fortran_COMPILER} NAME)

SET(SOMEFORTRAN  libfuncs.F  statsfuncs.F)
SET(SOMECPP  one_lonely_function.cpp)

SET(MYSOURCES ${SOMEFORTRAN} ${SOMECPP})

# the "lib" prefix is added automatically, so this target produces
# liblibmyfortran.a; naming the target just "myfortran" would give libmyfortran.a
ADD_LIBRARY(libmyfortran STATIC ${MYSOURCES})





Is it possible to generate object files for both the CPP file and the
FORTRAN files in the same build, or do I need to set up two directories with a
library build for each?

The CMakeLists.txt file above is not working yet.

Thanks

--

Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the CMake FAQ at: 
http://www.cmake.org/Wiki/CMake_FAQ

Follow this link to subscribe/unsubscribe:
http://www.cmake.org/mailman/listinfo/cmake


Re: [mapserver-users] Mapserver 1.1.0 WCS ENVI Binary File with accompanying ENVI header

2013-02-05 Thread Peter Willis
Hello,

I think the file issue is resolved. 

It turns out the server was actually passing me three (3) separate
files in multipart MIME sequence. 
Your comment about that made me look closer at what was coming back via the
web browser.

Files come in this sequence:

1.) the WCS metadata
2.) the ENVI header (HDR) file
3.) the ENVI data (IMG) file

I was being a bit thick I guess.
I kept wondering why the web browser was trying to give me the same file 3
times 
on each GetCoverage request.

A second issue related to this is how to get mapserver to *name* the files
properly. Each file sent by the mapserver is called 'wcs' with no file
extension.
This is why I thought I was getting the same file 3 times.
Is there a mapfile variable that can be set up to set the file names and
extensions if ENVI multiband format is selected as the FORMAT for GetCoverage?
Thanks for pointing out the multi part MIME thing.
You probably saved me days of debugging and unproductive spinning. 

Peter


---


Peter,

Hopefully Stefan Meissl will answer.  If not, poke me and I'll try to figure
it out.

For some versions of WCS multi-part responses are supported as long as GDAL
knows the files are associated.  Either in a zip or multi-part mime.

I'm a bit rusty on this now though.

Best regards,
Frank


On Mon, Feb 4, 2013 at 3:55 PM, Peter Willis pwil...@aslenv.com wrote:
 Hello,

 I have set up a WCS server using mapserver.

 I can retrieve a Float32 ENVI  'img'  BSQ file  just fine and select 
 specific bands.

 However, I  DO NOT get an accompanying header (.HDR) file with the 
 binary data download.

 how do I request an HDR file for my data?
 Is there some special map file secret to this?

 I thought of making a 'pseudo' GDAL driver that wraps gdal_translate 
 and ZIP together but I'm hoping there is an easier fix.


 Thanks,

 Peter



 ___
 mapserver-users mailing list
 mapserver-users@lists.osgeo.org
 http://lists.osgeo.org/mailman/listinfo/mapserver-users



-- 
---+
---+--
I set the clouds in motion - turn up   | Frank Warmerdam,
warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Software Developer

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] Mapserver 1.1.0 WCS ENVI Binary File with accompanying ENVI header

2013-02-04 Thread Peter Willis
Hello,

I have set up a WCS server using mapserver.

I can retrieve a Float32 ENVI  'img'  BSQ file  just fine and select
specific bands.

However, I  DO NOT get an accompanying header (.HDR) file with the binary
data download.

How do I request an HDR file for my data?
Is there some special map file secret to this?

I thought of making a 'pseudo' GDAL driver that wraps
gdal_translate and ZIP together but I'm hoping there is an easier fix.


Thanks,

Peter



___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] Mapserver WCS and PCI Geomatica

2013-02-04 Thread Peter Willis
Hello,

I can request WCS files directly via a web browser and get a file, so
I know WCS is working on mapserver.

I get an obscure error attempting to retrieve WCS from
mapserver with PCI Geomatica.

The remote data wizard shows my test data and extents after connecting,
but I get a dialog:

 'No coverage offerings found on server [OK]' . 

Pressing [OK] opens another dialog: 

'Uncaught exception: An exception occurred. Reason: Unable to read the
coverage description [OK]'



Any thin glimmers of hope?

Thanks 

Peter

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[gdal-dev] Is it possible to ehance input channels separately with gdal_translate ?

2012-10-26 Thread Peter Willis
Hello,

I have some input data where the dynamic range and offset of each channel
are distinct.

This means that applying the same linear enhancement based on data range to
my selected RGB output
bands will not produce an optimal visual enhancement. 

-scale smin smax dmin dmax

only provides the same linear enhancement for all selected input channels
(bands).

Also, the default causes gdal_translate to calculate the enhancement from the
data, which is also the wrong thing to do, owing to the changeability of albedo
from one scene to another (the possible presence or absence of cloud, snow, or
sand versus water or coal piles, i.e. light versus dark) and to the relative
difference in dynamic range between channels.

QUESTION: 

Is there a way to enhance my input channels separately in a single
gdal_translate call, or do I need to make multiple passes?
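(One thing I have seen mentioned but not verified: gdal_translate builds newer
than the one here are said to accept a separate scaling per band via repeated
'-scale_N' options, where N is the band number. Treat the syntax and the
numbers below as placeholders:)

gdal_translate -b 3 -b 2 -b 1 \
    -scale_1 0 1200 0 255 \
    -scale_2 0 900 0 255 \
    -scale_3 0 700 0 255 \
    input.img rgb_enhanced.tif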

Thanks

Peter



___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


[gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
Hello,

I have a curious problem with 'gdalinfo' (gdal version 1.9.0).

I have an input ENVI file with the following .HDR entry:

 data ignore value = -9.9900e+002

gdalinfo for the IMG file provides stats that exclude all -999.0 values, as
expected.

I then create a masked output file using 'gdal_rasterize' from that input
file once again using -999.0 as the mask value.
'gdal_rasterize' produces an ENVI format output with an accompanying .HDR
file.

The resulting 'gdal_rasterize' HDR *does not* contain the previous 'data
ignore value' entry. This is something that should
be fixed in future versions of that utility. The workaround is, of course,
to simply append the 'data ignore value'
to the end of the HDR file. This is a quick step:

echo "" >> myfile.hdr
echo "data ignore value = -9.9900e+002" >> myfile.hdr
echo "" >> myfile.hdr
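(And a quick check that the appended entry is actually picked up; 'myfile.img'
is just the data file matching the header above:)

gdalinfo myfile.img | grep -i nodata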

[NOTE: There are some other issues with ENVI HDR generation in GDAL that
should also be addressed but I won't go into those here.]

PROBLEM: 
gdalinfo does not parse the value of the appended header entry when it is
placed at the end of the HDR file.
In fact, the HDR file may be edited to move the entry to a location near the
top of the file, and it still goes unacknowledged by gdalinfo.

QUESTION:
Does gdalinfo expect the header entries to be in a specific order?

I have appended some parsing information to the end of this email.
You are probably aware of most, if not all, of this information.

Thanks,

Peter


--ENVI TEXT HEADERS---

The ENVI header file is not an 'ordered' data file. 
The entries may appear in any order. 
The generic format for entries is:

NAME = VALUE[EOL]


The equals sign '=' normally appears on the same line as NAME, without
intervening carriage-return characters.
It is normal, although not necessary, for '=' to be preceded by one space
character.
The equals sign '=' is considered to be the primary delimiter between NAME and
VALUE.

---NAMES---
'Name'  may be any string without carriage returns, and the NAME may
include space characters.

All names (and values) are trimmed of preceding and trailing white space
characters at parse time.
TAB '\t' is not normally used as whitespace but is parsed as though it
were a space ' ' character.

---VALUES---

'VALUE'  may be a singular value (ie: a number or a string with no
whitespace ) or a 'list'  bounded by curly braces  { }. 

Examples of singular string values would be:

Interleave = BSQ
Sensor type = Unknown
Wavelength units = nanometers

where 'BSQ' , 'Unknown', and 'nanometers' are singular strings containing no
whitespace characters.

In this context, VALUE strings containing necessary whitespace should
be encoded as a list. String values may contain carriage-return characters.
The comma ',' is considered to
be the default delimiter for list-type data.

Ie: 

My List = {   some string data ,  another string data , yet another data
string , 
 etc data strings ad infinitum 
 }


Number formats associated with VALUE

Singular Integers:

My Integer = 9
My Integer = -99
My Integer = 0

Singular Reals:

My Real =  0.9
My Real =  .9
My Real =  -9.0+e8

Lists of numbers:

My Integer List = {
1 , 2 , 3,  4 , 5 , 6
}

My Real List = {
1.0 , -2.00e+2 , 3.0e-4,  4.0 , .5 , 6.1
}
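As a concrete illustration of the rules above, a singular value can be pulled
out of a header from the shell (assuming the entry sits on one line rather
than inside a { } list):

grep -i '^data ignore value' myfile.hdr | cut -d= -f2 | tr -d ' '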




___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
[start of message truncated in the archive: the remainder of the attached ENVI
header's 'band names = { ... }' list, one
'Band Math (Band Math (Band Math (YYYY_DDD Landsat-4/5 TM :Surface Temperature
Sensor:6:LSatThermalCube_96-98.img):LSatThermalCube_96-98_float.img):LSatThermalCube_96-98_float_cmsk.img)'
entry per acquisition date, ending with 1998_212, 1998_221, 1998_260, 1998_269
and 1998_292]
coordinate system string = {
 PROJCS[UTM_Zone_10N, GEOGCS[GCS_WGS_1984, DATUM[D_WGS_1984,
 SPHEROID[WGS_1984, 6378137.0, 298.257223563]], PRIMEM[Greenwich, 0.0],
 UNIT[Degree, 0.0174532925199433]], PROJECTION[Transverse_Mercator],
 PARAMETER[False_Easting, 50.0], PARAMETER[False_Northing, 0.0],
 PARAMETER[Central_Meridian, -123.0], PARAMETER[Scale_Factor, 0.9996],
 PARAMETER[Latitude_Of_Origin, 0.0], UNIT[Meter, 1.0]]}






-Original Message-
From: Even Rouault [mailto:even.roua...@mines-paris.org] 
Sent: October-19-12 11:53 AM
To: gdal-dev@lists.osgeo.org
Cc: Peter Willis
Subject: Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI
header

On Friday 19 October 2012 20:41:16, Peter Willis wrote:
 Hello,
 
 I have a curious problem with 'gdalinfo'   (gdal version 1.9.0 ).
 
 I have an input ENVI file with the following .HDR entry:
 
  data ignore value = -9.9900e+002
 
 gdalinfo for the IMG file provides stats that exclude all values  
 -999.0 as expected.
 
 I then create a masked output file using 'gdal_rasterize' from that 
 input file once again using -999.0 as the mask value.
 'gdal_rasterize' produces an ENVI format output with an accompanying 
 .HDR file.

There's indeed no support currently in the ENVI driver to write the data
ignore value field.

 
 The resulting 'gdal_rasterize' HDR  *does not* contain the previous 
 'data ignore value'  entry. This is something that should be fixed in 
 future versions of that utility. The work around is, of course, to 
 simply concatenate the 'data ignore value'
 to the end of the HDR file. This is a quick step:
 
 echo "" >> myfile.hdr
 echo "data ignore value = -9.9900e+002" >> myfile.hdr
 echo "" >> myfile.hdr

I've tried that and this worked for me. Perhaps you could send your
myfile.hdr ?



LSatThermalCube_96-98_float_cmsk_landmasked.hdr
Description: Binary data


LSatThermalCube_96-98_float_cmsk_landmasked_ll.hdr
Description: Binary data
___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
Hello,

The one listed as 'failed' actually fails for me as it is sent.

I was experimenting by moving the value near the top of the file.
It continues to fail whether the entry is at the bottom of the file
or where it appears in the sample that I sent to you.

Could it be a line-feeds issue?
I haven't tried running unix2dos/dos2unix against the HDR files.

Does the program mind if the EOL are \13\10  vs. \10  ?
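A quick way to check (and, if need be, normalize) the line endings:

file myfile.hdr       # reports 'with CRLF line terminators' for DOS-style endings
dos2unix myfile.hdr   # convert in place if so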

Best Regards,

Peter


-Original Message-
From: Even Rouault [mailto:even.roua...@mines-paris.org] 
Sent: October-19-12 1:44 PM
To: gdal-dev@lists.osgeo.org
Cc: Peter Willis
Subject: Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

On Friday 19 October 2012 22:23:49, Peter Willis wrote:
 Hello,
 
 I am providing the failed example followed by a working example.
 Let me know if there is any additional information you need.

With both headers, gdalinfo correctly reports the nodata value... But you 
didn't send the .hdr files you've generated since the data ignore value is 
not at their end.

 
 Thanks,
 
 Peter
 
 
 --The Failed Header Sample--
 ENVI
 description = {
 /web2-disk1/PR762/landsat/cubes/96_98/LSatThermalCube_96-98_float_cmsk_landmasked_ll.img}
 samples = 2178
 lines   = 1841
 bands   = 49
 header offset = 0
 file type = ENVI Standard
 data type = 4
 interleave = bsq
 byte order = 0
 map info = {Geographic Lat/Lon, 1, 1, -124.258818822249, 51.6708415299984, 0.000337173258151718, 0.000337173258151718, WGS-84}
 wavelength units = Unknown
 data ignore value = -9.9900e+002
 band names = { Band 1, Band 2, Band 3, Band 4, Band 5, Band 6, Band 7, Band 8, Band 9, Band 10,
 Band 11, Band 12, Band 13, Band 14, Band 15, Band 16, Band 17, Band 18, Band 19, Band 20,
 Band 21, Band 22, Band 23, Band 24, Band 25, Band 26, Band 27, Band 28, Band 29, Band 30,
 Band 31, Band 32, Band 33, Band 34, Band 35, Band 36, Band 37, Band 38, Band 39, Band 40,
 Band 41, Band 42, Band 43, Band 44, Band 45, Band 46, Band 47, Band 48, Band 49}
 
 
 
 --The Working Header Sample--
 
 ENVI
 description = {
   Band Math Result, Expression = [(float(b2 gt 0.0) * b1) + (float(b2 eq 0.0)* (-999.0))]
   B1:LSatThermalCube_96-98_float_cmsk.img
   B2:LSatThermalCube_96-98_float_cmsk_multi_channel_mask.img [Thu Oct 18 10:12:05 2012]}
 samples = 1677
 lines   = 2281
 bands   = 49
 header offset = 0
 file type = ENVI Standard
 data type = 4
 interleave = bsq
 sensor type = Unknown
 byte order = 0
 map info = {UTM, 1.000, 1.000, 412940.000, 5724563.000, 3.00e+001, 3.00e+001, 10, North, WGS-84, units=Meters}
 wavelength units = Unknown
 data ignore value = -9.9900e+002
 pixel size = {3.e+001, 3.e+001, units=Meters}
 band names = {
  Band Math (Band Math (Band Math (1996_088 Landsat-4/5 TM :Surface Temperature Sensor:6:LSatThermalCube_96-98.img):LSatThermalCube_96-98_float.img):LSatThermalCube_96-98_float_cmsk.img),
  ... one entry of the same form per acquisition date (1996_104, 1996_111, 1996_120, 1996_136, 1996_152, 1996_159, 1996_168, 1996_191, 1996_200, 1996_207, 1996_216, ...); the quoted list is cut off here in the archive }

Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
Hello,

When I run:

gdalinfo -stats FILENAME

One file (success) provides statistics calculated by excluding the no data 
values.
Ie: min=0.25 max=1234.0 mean=12.0

The other file (failed) provides statistics with -999 calculated into the stats.
Ie: min = -999.0 max=1234.0 mean=-993.0

I have looked at both binary files. Each of the two files contains the mask 
value.
ENVI loads both files and headers fine and reports the same no-data value for 
each file in the header info.

I think you could probably model the problem by generating a much smaller file 
with multiple channels.

The steps happening are as follows:

1.) Original file (IN_IMG) is UTM projection, ENVI format, 49 bands BSQ, float32

2.) gdalwarp -of ENVI -t_srs "+proj=latlong +datum=WGS84" ${IN_IMG} ${OUT_IMG}

3.) put the data ignore value back in the file:
echo "" >> $OUT_HDR
echo "data ignore value = -9.9900e+002" >> $OUT_HDR

4.) Call gdalinfo on the original file to get the number of bands = $BANDS
5.) loop through the bands (for BAND in $(seq 1 $BANDS)):
  gdal_rasterize -b $BAND -burn -999 -l ${LAYER_NAME} ${MASK_SHP} ${OUT_IMG}
6.) gdalinfo -stats ${OUT_IMG}

At step #6,  gdalinfo -stats reports data stats with the -999 values calculated 
into the min/max/mean/stdev.


Band 2 Block=2178x1 Type=Float32, ColorInterp=Undefined
  Description = Band 2
  Min=-999.000 Max=12.250
  Minimum=-999.000, Maximum=12.250, Mean=-946.817, StdDev=222.404
  NoData Value=-999
  Metadata:
STATISTICS_MINIMUM=-999
STATISTICS_MAXIMUM=12.25
STATISTICS_MEAN=-946.81678963104
STATISTICS_STDDEV=222.4036660914


If I perform gdalinfo -stats on the original UTM file the min/max/mean/stdev  
exclude -999 from calculations.

Band 2 Block=1677x1 Type=Float32, ColorInterp=Undefined
  Description = Band Math (Band Math (Band Math (1996_104 Landsat-4/5 TM 
:Surface Temperature 
Sensor:6:LSatThermalCube_96-98.img):LSatThermalCube_96-98_float.img):LSatThermalCube_96-98_float_cmsk.img)
  Min=0.250 Max=12.250
  Minimum=0.250, Maximum=12.250, Mean=1.379, StdDev=0.995
  NoData Value=-999
  Metadata:
STATISTICS_MINIMUM=0.25
STATISTICS_MAXIMUM=12.25
STATISTICS_MEAN=1.3787828985307
STATISTICS_STDDEV=0.99545771926881


I hope this helps. 
Let me know if you need any more information.

Peter


-Original Message-
From: Even Rouault [mailto:even.roua...@mines-paris.org] 
Sent: October-19-12 2:08 PM
To: gdal-dev@lists.osgeo.org
Cc: Peter Willis
Subject: Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

On Friday 19 October 2012 22:57:13, Peter Willis wrote:
 Hello,
 
 The one listed as 'failed'  actually fails for me as it is sent .

Just to be sure we are talking about the same thing, what do you call fail 
exactly ? Is it that gdalinfo doesn't report NoData Value=-999 for the bands ?

Just for reference, I've created a fake
LSatThermalCube_96-98_float_cmsk_landmasked_ll.img and gdalinfo on it 
reports:

Driver: ENVI/ENVI .hdr Labelled
Files: LSatThermalCube_96-98_float_cmsk_landmasked_ll.img
   LSatThermalCube_96-98_float_cmsk_landmasked_ll.hdr
Size is 2178, 1841
Coordinate System is:
GEOGCS[WGS 84,
DATUM[WGS_1984,
SPHEROID[WGS 84,6378137,298.257223563,
AUTHORITY[EPSG,7030]],
TOWGS84[0,0,0,0,0,0,0],
AUTHORITY[EPSG,6326]],
PRIMEM[Greenwich,0,
AUTHORITY[EPSG,8901]],
UNIT[degree,0.0174532925199433,
AUTHORITY[EPSG,9108]],
AUTHORITY[EPSG,4326]]
Origin = (-124.258818822248998,51.670841529998398)
Pixel Size = (0.000337173258152,-0.000337173258152)
Metadata:
[ cut ]
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (-124.2588188,  51.6708415) (124d15'31.75W, 51d40'15.03N)
Lower Left  (-124.2588188,  51.0501056) (124d15'31.75W, 51d 3' 0.38N)
Upper Right (-123.5244555,  51.6708415) (123d31'28.04W, 51d40'15.03N)
Lower Right (-123.5244555,  51.0501056) (123d31'28.04W, 51d 3' 0.38N)
Center      (-123.8916371,  51.3604735) (123d53'29.89W, 51d21'37.70N)
Band 1 Block=2178x1 Type=Float32, ColorInterp=Undefined
  Description = Band 1
  NoData Value=-999
Band 2 Block=2178x1 Type=Float32, ColorInterp=Undefined
  Description = Band 2
  NoData Value=-999
[... cut ... ]
Band 49 Block=2178x1 Type=Float32, ColorInterp=Undefined
  Description = Band 49
  NoData Value=-999


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
Hello,

I am seeing that you are correct that the 'no data value' is being read in 
each case.
Perhaps my interpretation of the problem is incorrect.

Look, however, at the 'Metadata:' statistics.
Both files have -999 assigned as the 'No Data Value', but 'STATISTICS_MINIMUM' 
for one file shows -999 (??).
This means the values are not being ignored for the purpose of calculating 
statistics.

--THE BAD FILE--

Band 2 Block=2178x1 Type=Float32, ColorInterp=Undefined
  Description = Band 2
  Min=-999.000 Max=12.250
  Minimum=-999.000, Maximum=12.250, Mean=-946.817, StdDev=222.404
  NoData Value=-999
  Metadata:
STATISTICS_MINIMUM=-999
STATISTICS_MAXIMUM=12.25
STATISTICS_MEAN=-946.81678963104
STATISTICS_STDDEV=222.4036660914


---THE GOOD FILE---

Band 2 Block=1677x1 Type=Float32, ColorInterp=Undefined
  Description = Band Math (Band Math (Band Math (1996_104 Landsat-4/5 TM 
:Surface Temperature 
Sensor:6:LSatThermalCube_96-98.img):LSatThermalCube_96-98_float.img):LSatThermalCube_96-98_float_cmsk.img)
  Min=0.250 Max=12.250
  Minimum=0.250, Maximum=12.250, Mean=1.379, StdDev=0.995
  NoData Value=-999
  Metadata:
STATISTICS_MINIMUM=0.25
STATISTICS_MAXIMUM=12.25
STATISTICS_MEAN=1.3787828985307
STATISTICS_STDDEV=0.99545771926881

Note the difference in stats calculations.

Peter



___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] gdalinfo -stats Misses invalid data value in ENVI header

2012-10-19 Thread Peter Willis
Hello,

Good news. 

Removing the '.aux.xml' file and again running gdalinfo -stats indeed 
fixes the problem.

I now get the expected statistics values.

Thank you for your help.

Best Regards,

Peter



On Friday 19 October 2012 23:57:01, Peter Willis wrote:
 Hello,
 
 At step #6,  gdalinfo -stats reports data stats with the -999 values 
 calculated into the min/max/mean/stdev.
 
 
 Band 2 Block=2178x1 Type=Float32, ColorInterp=Undefined
   Description = Band 2
   Min=-999.000 Max=12.250
   Minimum=-999.000, Maximum=12.250, Mean=-946.817, StdDev=222.404
   NoData Value=-999
   Metadata:
 STATISTICS_MINIMUM=-999
 STATISTICS_MAXIMUM=12.25
 STATISTICS_MEAN=-946.81678963104
 STATISTICS_STDDEV=222.4036660914
 

Ok, indeed that's not expected.

Well, perhaps you could first check whether there is a .aux.xml file that 
contains bad values, and if so, delete it before running gdalinfo -stats again.

Other than that, I have no more ideas without having access to the .img file 
itself.


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI SHP format Vector [SEC=UNCLASSIFIED]

2012-10-18 Thread Peter Willis
Hello,

 

Thanks, that worked.

Looks like I missed the in-place band number selection flag (-b).

Does anyone know if I can direct output to another file, or do I always need to
copy the original file and 'burn' the values into an existing duplicate file?

I guess it's just as easy either way.
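A sketch of the copy-then-burn route (file names, layer name and band count are
placeholders based on the examples in this thread):

gdal_translate -of ENVI input49band.img masked.img    # work on a copy, leaving the original untouched
for i in $(seq 1 49); do
    gdal_rasterize -b $i -burn -999 -l mask mask.shp masked.img
done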

 

Thanks again,

 

Peter

 

 

 

From: Pinner, Luke [mailto:luke.pin...@environment.gov.au] 
Sent: October-17-12 4:11 PM
To: Peter Willis; gdal-dev@lists.osgeo.org
Subject: RE: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI
SHP format Vector [SEC=UNCLASSIFIED]

 

Perhaps something like

 

for i in {1..49}; do gdal_rasterize -burn -999 -b $i mask.shp 49bandenvi.dat; done

 

Luke

 

From: gdal-dev-boun...@lists.osgeo.org
[mailto:gdal-dev-boun...@lists.osgeo.org] On Behalf Of Peter Willis
Sent: Thursday, 18 October 2012 9:43 AM
To: gdal-dev@lists.osgeo.org
Subject: Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI
SHP format Vector

 

Hello,

 

On second thoughts, this is not really what I want.

 

I have a SHP format polygon vector file  already.

I want to use that existing vector file to mask the ENVI format BSQ file
through all (49) channels.

(ie:  'mask' meaning set any values inside vector polygons to a specific
value within the output ENVI BSQ file )

 

Perhaps I'm missing something.

 

Peter 

 

From: fwarmer...@gmail.com [mailto:fwarmer...@gmail.com] On Behalf Of Frank
Warmerdam
Sent: October-17-12 3:22 PM
To: Peter Willis
Cc: gdal-dev@lists.osgeo.org
Subject: Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI
SHP format Vector

 

Peter,

 

I'm guessing you are using the gdal_polygonize.py script

for masking.  Is that right?  I am not aware of any particular

reason this shouldn't work for any update in place format

(ie. shows rw+ in the gdalinfo --formats list).  ENVI is in

this list. 

 

BTW, before we do a lot of work to investigate this you might

want to see if the problem persists with GDAL 1.9.  GDAL 1.6.3

is getting pretty antique.

 

Best regards,

Frank

 

On Wed, Oct 17, 2012 at 2:59 PM, Peter Willis pwil...@aslenv.com wrote:

Hello,

Is it possible to use ESRI SHP polygon file to mask
ENVI  BSQ img format files with more than 1 band?

I can mask a GTiff file using the SHP but ENVI file does not
appear to work.  Gdal version is  1.6.3 .

The documentation appears unclear for any raster that is not
specifically GTiff.

Thanks,

Peter

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev





 

-- 
---+
--
I set the clouds in motion - turn up   | Frank Warmerdam,
warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Software Developer


___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

[gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI SHP format Vector

2012-10-17 Thread Peter Willis
Hello,

Is it possible to use an ESRI SHP polygon file to mask
ENVI BSQ img format files with more than one band?

I can mask a GTiff file using the SHP, but the ENVI file does not
appear to work. The GDAL version is 1.6.3.

The documentation appears unclear for any raster that is not
specifically GTiff.

Thanks,

Peter

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev


Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI SHP format Vector

2012-10-17 Thread Peter Willis
Hello,

 

I was using gdal_rasterize directly rather than gdal_polygonize.py.
No gdal_polygonize.py was found for my revision.

 

I had forgotten about gdalinfo --formats. This shows rw+ for the ENVI format,
which is what I need.

Upon looking I find that I have already installed revision 1.9 in another
/usr/local directory.

Perhaps I should use the more recent revision for attempting this.
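
For reference, the check is just the following (the output shown is roughly what
my 1.9 build prints; I haven't rechecked the exact wording):

  $ gdalinfo --formats | grep -i envi
    ENVI (rw+): ENVI .hdr Labelled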

 

Thanks for the pointers,

 

Peter

 

 

 

From: fwarmer...@gmail.com [mailto:fwarmer...@gmail.com] On Behalf Of Frank
Warmerdam
Sent: October-17-12 3:22 PM
To: Peter Willis
Cc: gdal-dev@lists.osgeo.org
Subject: Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI
SHP format Vector

 

Peter,

 

I'm guessing you are using the gdal_polygonize.py script

for masking.  Is that right?  I am not aware of any particular

reason this shouldn't work for any update in place format

(ie. shows rw+ in the gdalinfo --formats list).  ENVI is in

this list. 

 

BTW, before we do a lot of work to investigate this you might

want to see if the problem persists with GDAL 1.9.  GDAL 1.6.3

is getting pretty antique.

 

Best regards,

Frank

 

On Wed, Oct 17, 2012 at 2:59 PM, Peter Willis pwil...@aslenv.com wrote:

Hello,

Is it possible to use ESRI SHP polygon file to mask
ENVI  BSQ img format files with more than 1 band?

I can mask a GTiff file using the SHP but ENVI file does not
appear to work.  Gdal version is  1.6.3 .

The documentation appears unclear for any raster that is not
specifically GTiff.

Thanks,

Peter

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev





 

-- 
---+
--
I set the clouds in motion - turn up   | Frank Warmerdam,
warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Software Developer

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI SHP format Vector

2012-10-17 Thread Peter Willis
Hello,

 

On second thoughts, this is not really what I want.

 

I have a SHP format polygon vector file  already.

I want to use that existing vector file to mask the ENVI format BSQ file
through all (49) channels.

(ie:  'mask' meaning set any values inside vector polygons to a specific
value within the output ENVI BSQ file )

 

Perhaps I'm missing something.

 

Peter 

 

From: fwarmer...@gmail.com [mailto:fwarmer...@gmail.com] On Behalf Of Frank
Warmerdam
Sent: October-17-12 3:22 PM
To: Peter Willis
Cc: gdal-dev@lists.osgeo.org
Subject: Re: [gdal-dev] Using gdal_rasterize to mask ENVI file Using ESRI
SHP format Vector

 

Peter,

 

I'm guessing you are using the gdal_polygonize.py script

for masking.  Is that right?  I am not aware of any particular

reason this shouldn't work for any update in place format

(ie. shows rw+ in the gdalinfo --formats list).  ENVI is in

this list. 

 

BTW, before we do a lot of work to investigate this you might

want to see if the problem persists with GDAL 1.9.  GDAL 1.6.3

is getting pretty antique.

 

Best regards,

Frank

 

On Wed, Oct 17, 2012 at 2:59 PM, Peter Willis pwil...@aslenv.com wrote:

Hello,

Is it possible to use ESRI SHP polygon file to mask
ENVI  BSQ img format files with more than 1 band?

I can mask a GTiff file using the SHP but ENVI file does not
appear to work.  Gdal version is  1.6.3 .

The documentation appears unclear for any raster that is not
specifically GTiff.

Thanks,

Peter

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev





 

-- 
---+
--
I set the clouds in motion - turn up   | Frank Warmerdam,
warmer...@pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush| Geospatial Software Developer

___
gdal-dev mailing list
gdal-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/gdal-dev

[Savannah-register-public] [task #10860] Submission of GDL GNU Data Language User Help

2011-01-06 Thread Peter Willis

URL:
  http://savannah.gnu.org/task/?10860

 Summary: Submission of GDL GNU Data Language User Help
 Project: Savannah Administration
Submitted by: pwillis
Submitted on: Fri 07 Jan 2011 03:01:12 AM GMT
 Should Start On: Fri 07 Jan 2011 12:00:00 AM GMT
   Should be Finished on: Mon 17 Jan 2011 12:00:00 AM GMT
Category: Project Approval
Priority: 5 - Normal
  Status: None
 Privacy: Public
Percent Complete: 0%
 Assigned to: None
 Open/Closed: Open
 Discussion Lock: Any
  Effort: 0.00

___

Details:

A new project has been registered at Savannah 
This project account will remain inactive until a site admin approves or
discards the registration.


= Registration Administration =

While this item will be useful to track the registration process, *approving
or discarding the registration must be done using the specific Group
Administration
https://savannah.gnu.org/siteadmin/groupedit.php?group_id=10705 page*,
accessible only to site administrators, effectively *logged as site
administrators* (superuser):

* Group Administration
https://savannah.gnu.org/siteadmin/groupedit.php?group_id=10705


= Registration Details =

* Name: *GDL GNU Data Language User Help*
* System Name:  *gdl-help*
* Type: non-GNU software & documentation
* License: GNU General Public License v2 or later



 Description: 
Help Pages and end user wiki for GDL GNU Data Language


 Tarball URL: 
http://downloads.sourceforge.net/project/gnudatalanguage/gdl/0.9/gdl-0.9.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fgnudatalanguage%2F&ts=1294369109&use_mirror=superb-sea2






___

Reply to this item at:

  http://savannah.gnu.org/task/?10860

___
  Message sent via/by Savannah
  http://savannah.gnu.org/




Re: Installing GTK Binary Packages Into MINGW on MS Windows Using Wascana and Eclipse

2010-08-19 Thread Peter Willis

Øystein Schønning-Johansen wrote:



On Wed, Aug 18, 2010 at 8:07 PM, Peter Willis pwil...@aslenv.com 
mailto:pwil...@aslenv.com wrote:


Hello,

I would like to use mingw to port and compile a simple GTK application
under MS Windows.

The download page for windows located at:

http://www.gtk.org/download-windows.html

recommends the mingw tool chain and contains tables of relevant
packages as well as dependencies.

I have downloaded the various required packages and dependencies
marked 'Dev' on that page.

What is unclear from any installation instructions I
have been able to find is where and how to install these
packages into mingw.

Do I simply decompress the archives in the mingw directory
hierarchy so that the files end up in the respective directories there?

*or*

Do I need to make separate hierarchies for each of the zip files
and point GCC at the 'lib' and 'header' directories using '-L -l'
and '-I -i' flags respectively?


I've used GTK and glib with mingw for many years now, and I've always 
put mingw in c:\mingw and all the gtk stuff in c:\gtk.


This works perfectly, and I usually also add some simple unix-ish tools 
such that I can mimic a unix system at a dos prompt.


To make all include and linking simple I usually have a makefile that 
contains these lines:


INCLUDE = -I/C/GTK/include $(shell pkg-config --cflags gtk+-2.0)
LIBS = $(shell pkg-config --libs gtk+-2.0)

... or somthing similar depending on what I need.

I would not recommend to put GTK and MinGW in the same directories!

-Øystein



Thanks,

That's exactly what I needed to know.

I am using the 'wascana' build of eclipse IDE, which has
mingw included in the package.

Do you have any opinions regarding using this?
My first instinct was to avoid it, but I prefer the IDE
and it saves me some work.



One other question I have is regarding packaging of
the final application along with the GTK runtime
for windows.

How problematic are the version differences
in runtime DLLs? Will making an installer that includes
GTK runtime of a different version break GTK software that
people have previously installed? Does it matter?

I don't want to break everyone's Gimp.


Peter
___
gtk-list mailing list
gtk-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-list


Installing GTK Binary Packages Into MINGW on MS Windows Using Wascana and Eclipse

2010-08-18 Thread Peter Willis

Hello,

I would like to use mingw to port and compile a simple GTK application
under MS Windows.

The download page for windows located at:

http://www.gtk.org/download-windows.html

recommends the mingw tool chain and contains tables of relevant packages 
as well as dependencies.


I have downloaded the various required packages and dependencies
marked 'Dev' on that page.

What is unclear from any installation instructions I
have been able to find is where and how to install these
packages into mingw.

Do I simply decompress the archives in the mingw directory
hierarchy so that the files end up in the respective directories there?

*or*

Do I need to make separate hierarchies for each of the zip files
and point GCC at the 'lib' and 'header' directories using '-L -l'
and '-I -i' flags respectively?

Thanks

Peter

___
gtk-list mailing list
gtk-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-list


Re: [postgis-users] WKT expected, EWKT provided

2010-05-12 Thread Peter Willis

strk wrote:

On Tue, May 11, 2010 at 01:56:21PM -0700, Peter Willis wrote:


I am entering 2D polygons so from where would postgis be warning
about EWKT entry? 


Are you embedding a SRID value inside your WKT ? That's also an extension.

--strk; 


  ()   Free GIS & Flash consultant/developer
  /\   http://strk.keybit.net/services.html






Ah, that's it! The WKT text has the SRID prefixed to the polygon text.
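
In other words, either of these forms keeps the warning away (just a sketch;
the database name and the geometry here are made up):

  # EWKT, SRID embedded in the text -- needs GeomFromEWKT()
  psql -d gisdb -c "SELECT GeomFromEWKT('SRID=4326;POLYGON((0 0,0 1,1 1,1 0,0 0))');"
  # plain OGC WKT, SRID passed as a separate argument
  psql -d gisdb -c "SELECT GeomFromText('POLYGON((0 0,0 1,1 1,1 0,0 0))', 4326);"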

Thanks

P
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


[mapserver-users] Mathematical Scaling of Floating Point Data in Mapserver Map File

2010-05-12 Thread Peter Willis


Hello,

Is it possible to apply nonlinear scaling to classify
floating point tiff rasters?

All examples that I have seen appear to assume a linear
dataset.

If I wish to scale the data by the base 10 log
of the data and then scale the Red, Green and Blue values
of the classification, can I do it in the map file?

What are the available functions that may be used
in the mapfile 'PROCESSING' and 'EXPRESSION' items?

Thanks,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mathematical Scaling of Floating Point Data in Mapserver Map File

2010-05-12 Thread Peter Willis

Thanks Frank,

That helps somewhat.

It's going to be a bit of a bear making a 256 color lookup
that way. I guess I can script the CLASS lookup sections of
the map file(s).
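
Probably something along these lines (only a sketch -- the log-spaced
breakpoints and the grey ramp are invented, and it leans on the same
first-matching-class behaviour as Frank's example):

  for i in $(seq 1 256); do
    hi=$(echo "e( l(10) * $i / 51.2 )" | bc -l)   # 256 log-spaced upper bounds, up to 10^5
    c=$((i - 1))                                  # grey ramp 0..255
    printf 'CLASS\n  EXPRESSION ([pixel] < %s)\n  COLOR %d %d %d\nEND\n' "$hi" "$c" "$c" "$c"
  done >> scaled_classes.map   # then paste into the LAYER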

Peter


Frank Warmerdam wrote:

Peter Willis wrote:


Hello,

Is it possible to apply nonlinear scaling to classify
floating point tiff rasters.

All examples that I have seen appear to assume a linear
dataset.

If I wish to scale the data by the base 10 log
of the data and then scale the Red, Green and Blue values
of the classification, can I do it in the map file?

What are the available functions that may be used
in the mapfile 'PROCESSING' and 'EXPRESSION' items?


Peter,

The short answer is that you cannot, in general, do non-linear
scaling.  Depending on the dynamic range of your data, you could
use classes to do something approximating non-linear scaling but you
need to be aware that in the background there is still a linear scaling
being done before the classification is applied to the scaled buckets.
But you can request as many as 64K buckets which would still give you
pretty good control over something like four orders of magnitude.

You might use something like:

  PROCESSING "SCALE=0,100000"
  PROCESSING "SCALE_BUCKETS=64000"

  CLASS
    EXPRESSION ([pixel] < 2.5)
    COLOR 255 0 0
  END
  CLASS
    EXPRESSION ([pixel] < 10)
    COLOR 235 20 20
  END
  CLASS
    EXPRESSION ([pixel] < 25)
    COLOR 215 40 40
  END
  CLASS
    EXPRESSION ([pixel] < 100)
    COLOR 195 60 60
  END
  CLASS
    EXPRESSION ([pixel] < 250)
    COLOR 175 80 80
  END
  CLASS
    EXPRESSION ([pixel] < 1000)
    COLOR 165 100 100
  END
  CLASS
    EXPRESSION ([pixel] < 2500)
    COLOR 145 120 120
  END
  CLASS
    EXPRESSION ([pixel] < 10000)
    COLOR 125 140 140
  END
  CLASS
    EXPRESSION ([pixel] < 25000)
    COLOR 105 160 160
  END
  CLASS
    EXPRESSION ([pixel] <= 100000)
    COLOR 85 180 180
  END

Note that because the values between 0 and 100000 are classified
into only 64000 buckets you don't really have very good precision
at the bottom end of the range.  So even though we specify
[pixel] < 2.5, each bucket is about 1.5625 wide so the first two
buckets will get put into the first class, including values
between 2.5 and 3.125.  This imprecision will become insignificant
to a log scale as you move up to larger values.

Good luck,



___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mathematical Scaling of Floating Point Data in Mapserver Map File

2010-05-12 Thread Peter Willis

Frank Warmerdam wrote:

There isn't anything magical about 256.



256 or better raster scales sure are pretty though :)

Seriously though, the raster data is floating point log data
with relatively random spatial distribution. Short of making
two images, one log float and one anti-log float, it would be
nice to be able to scale on the fly without duplicating the
original data.

Duplication of rasters is O.K. if you have one or two, but
once you near the 30k-40k files mark the disk space becomes
a tad expensive.

Would it be out of line to suggest some scaling functions
like the generic math functions:

log
log10
sin
asin
sinh
cos
acos
cosh
tan
atan
tanh
abs
sqrt
mod

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[postgis-users] WKT expected, EWKT provided

2010-05-11 Thread Peter Willis

Hello,


I am getting the following error during database queries:

WARNING:OGC WKT expected, EWKT provided - use GeomFromEWKT() for this

I found some references to this using Google, so it appears to be a
known item.

I have tried wrapping all of my text geometries in 'GeomFromEWKT()'
to no avail.

How many dimensions is postgis WKT now expecting for 'POLYGON'?

From where is this warning emitted? (ie: part of what library )


Thanks,

P

Ps.
I thought to search your mailing list archives but they don't appear
to be search-able. Is there any way to search the mail list archives
other than by downloading every compressed file and using
zcat and fgrep ?
___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [postgis-users] WKT expected, EWKT provided

2010-05-11 Thread Peter Willis

Mike Toews wrote:

On 11 May 2010 12:04, Peter Willis pwil...@aslenv.com wrote:

How many dimensions is postgis WKT now expecting for 'POLYGON'?


OGC WKT has only 2 dimensions while EWKT has up to 4.


Ps.
I thought to search your mailing list archives but they don't appear
to be search-able. Is there any way to search the mail list archives
other than by downloading every compressed file and using
zcat and fgrep ?


You'd think it should be easy to do:
http://postgis.refractions.net/pipermail/postgis-users/2008-December/022098.html

-Mike






Hi Mike,

Thanks for the location of the mail list search.
(I must be blind sigh!)

I am entering 2D polygons so from where would postgis be warning
about EWKT entry? The error must be internal to the postgresql
library module. Is this a versioning problem between sub-libraries
of postgis?

P


___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


Re: [mapserver-users] Continuing problem with WCS and ENVI Multi-band files

2010-02-09 Thread Peter Willis

Hello,

This has actually been resolved. I just haven't got back here to
give an update until now.

The proper mapserver use case for ENVI type rasters and WCS
will be posted at mapserver.org by Frank Warmerdam once I
get my 'stuff' together.

There are some small sample ENVI test files available at:

http://filebin.ca/cpaux/ENVI_test_files.ZIP

NOTE: the files listed as ...int32...
are actually  ...int16... and the file name
is a typo.

These files (content) are not scientific data files
but 6 band binary files with headers.
The image data is a text burn in of the
band number in each band on a zero background.

The problem I was having with my WCS dataset
LAYER definition seemed to clear up with some
minor editing.

There is a sample map file for UMN mapserver
and ENVI format WCS located at:

http://filebin.ca/upkmxs/mapfile.txt

The geographic pixel size in the map file is incorrect.
It should be 2.8125 degrees not 0.08333. The file is only 128x64 spatially.

The band selection parameter for the GetCoverage URL is bands=<number>

ie:(see end of URL line)

http://somewcsserver.net/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=BOGUStestData&CRS=EPSG:4326&BBOX=-180,-90,180,90&WIDTH=128&HEIGHT=64&FORMAT=GEOTIFF_FLOAT&bands=6

Hope that's useful.

If you need NDVI that's a whole other story. :)

Peter


Lime, Steve D (DNR) wrote:

Any chance of getting a test dataset?

-Original Message-
From: mapserver-users-boun...@lists.osgeo.org 
[mailto:mapserver-users-boun...@lists.osgeo.org] On Behalf Of Peter Willis
Sent: Monday, February 08, 2010 12:37 PM
To: mapserver-users@lists.osgeo.org
Subject: [mapserver-users] Continuing problem with WCS and ENVI Multi-band files

Hello,

I continue to have a problem selecting
a single band from a multi-band (563 bands)
ENVI format BSQ file.

Using the following URL mapserver gives me a FLOAT32 GTiff
file but it is always filled with zeros.

http://wcs.foo.com/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=NDVI&CRS=EPSG:4326&BBOX=-135,55,-121,46&WIDTH=256&HEIGHT=256&FORMAT=GEOTIFF_FLOAT&Band=122

Is there any documentation that clearly
shows how WCS can be set up to do this?

My current LAYER definition is as follows:

LAYER
   NAME VegetationIndex
   STATUS OFF
   DEBUG ON
   TYPE RASTER
   METADATA
 wcs_label   Data/NDVI
 wcs_rangeset_name   'bands'
 wcs_rangeset_label  NDVI
 ows_extent '-135 55 -121 46'
 wcs_resolution '0.08332 -0.083332'
 ows_srs 'EPSG:4326'
 wcs_srs 'EPSG:4326'
 wcs_formats 'GEOTIFF_FLOAT,GEOTIFF_INT16'
 wcs_nativeformat 'ENVI'

   END
   DATA /NDVI/ndvi.img
   PROJECTION
 init=epsg:4623
   END
   DUMP TRUE
END

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] Continuing problem with WCS and ENVI Multi-band files

2010-02-08 Thread Peter Willis

Hello,

I continue to have a problem selecting
a single band from a multi-band (563 bands)
ENVI format BSQ file.

Using the following URL mapserver gives me a FLOAT32 GTiff
file but it is always filled with zeros.

http://wcs.foo.com/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=NDVI&CRS=EPSG:4326&BBOX=-135,55,-121,46&WIDTH=256&HEIGHT=256&FORMAT=GEOTIFF_FLOAT&Band=122

Is there any documentation that clearly
shows how WCS can be set up to do this?

My current LAYER definition is as follows:

LAYER
  NAME VegetationIndex
  STATUS OFF
  DEBUG ON
  TYPE RASTER
  METADATA
wcs_label   Data/NDVI
wcs_rangeset_name   'bands'
wcs_rangeset_label  NDVI
ows_extent '-135 55 -121 46'
wcs_resolution '0.08332 -0.083332'
ows_srs 'EPSG:4326'
wcs_srs 'EPSG:4326'
wcs_formats 'GEOTIFF_FLOAT,GEOTIFF_INT16'
wcs_nativeformat 'ENVI'

  END
  DATA /NDVI/ndvi.img
  PROJECTION
init=epsg:4623
  END
  DUMP TRUE
END

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] How to serve individual RAW binary Bands fromWCS as floating point geotiff

2010-01-29 Thread Peter Willis

Hello,

Interesting... The documentation indicates:

RangeSubset=contents:nearest[Band[120]]

I tried what you suggest but continue to have the same problem.
The mapserver *did not* complain about 'RESAMPLE=BILINEAR&Band=120'
so I guess we can assume that those parameters work.

Peter


Rahkonen Jukka wrote:

Hi,

I have never used this myself: RangeSubset=contents:nearest[Band[120]]
Is that WCS 1.0.0 parameter?  For selecting bands from WCS 1.0.0 service I have been using something like 
RESAMPLE=BILINEAR&Band=4


-Jukka Rahkonen-


-Original message-
From: mapserver-users-boun...@lists.osgeo.org on behalf of: Peter Willis
Sent: Fri 29.1.2010 0:26
To: mapserver-users@lists.osgeo.org
Subject: Re: [mapserver-users] How to serve individual RAW binary Bands fromWCS as 
floating point geotiff
 
Hello,


The definition of GEOTIFF_FLOAT is as follows:

OUTPUTFORMAT
   NAME GEOTIFF_FLOAT
   DRIVER GDAL/GTiff
   MIMETYPE application/octet-stream
   IMAGEMODE FLOAT32
END

I Added the MIMETYPE as shown to ensure that web browsers
would ask to save the file rather than just open it in an
image viewer from a temp file. (MS Windows...)

That part works fine.


Mostly I am concerned with extracting a specific 'band'
from the multi-band file. The server serves floating point
geotiff just fine.

Peter


Rahkonen Jukka wrote:

Hi,

How have you defined the OUTPUTFORMAT for GEOTIFF_FLOAT?

-Jukka Rahkonen-


Peter Willis wrote:

 

Hello,
I have been setting up a WCS map file for mapserver.
PROBLEM:
Problems arise when rasters are defined
as follows:

LAYER
   NAME SomeDataChannel120
   STATUS OFF
   DEBUG ON
   TYPE RASTER ### required
   PROCESSING BANDS=120
   METADATA
 wcs_label   Data/stuff
 wcs_rangeset_name   'bands'
 wcs_rangeset_label  Stuff thats mapped
 ows_extent '-135 55 -121 46'
 wcs_resolution '0.08332 -0.083332'
 ows_srs 'EPSG:4326'
 wcs_srs 'EPSG:4326'
 wcs_formats 'GEOTIFF_FLOAT,GEOTIFF_INT16'
 wcs_nativeformat 'ENVI'
#wcs_bandcount '563'
 wcs_rangeset_axes 'bands'
   END
   DATA /data/stuff.img
   PROJECTION
 init=epsg:4623
   END
   DUMP TRUE ### required
END


I have  'wcs_bandcount'  commented out because mapserver
causes an internal error in the web server if I define the
band count.

I get floating point geotiff served with the following client request:
http://sparky.com/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=SomeDataChannel120&CRS=EPSG:4326&BBOX=-135,55,-121,46&WIDTH=432&HEIGHT=216&FORMAT=GEOTIFF_FLOAT&RangeSubset=contents:nearest[Band[120]]

..however, I always only get the first band from the img file


QUESTIONS:

Is there a problem with this setup?

Do I need to define each band as a layer in the mapfile
or will mapserver WCS allow the client to request a single
channel?

Thanks for any enlightenment,

Peter

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users






___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] How to serve individual RAW binary Bands fromWCS as floating point geotiff

2010-01-29 Thread Peter Willis

Hello,

Thanks. I have the following line in the map file LAYER declaration:

PROCESSING BANDS=120

In any case, the current configuration gives me a floating point geotiff
that is all zeros. I know that band 120 of the file has non-zero double 
precision data in it.


Maybe it's a byte ordering problem...
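
For my own sanity I'll probably also pull the band out with GDAL directly and
check it outside of MapServer, roughly:

  gdal_translate -b 120 /data/stuff.img /tmp/band120.tif
  gdalinfo -stats /tmp/band120.tif   # min/max/mean should come back non-zero if the band itself is fine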

Peter



Lime, Steve D (DNR) wrote:

Is this link of help?

  http://mapserver.org/input/raster.html#special-processing-directives

There's a directive to extract a single band from an n-band image...

Steve

-Original Message-
From: mapserver-users-boun...@lists.osgeo.org 
[mailto:mapserver-users-boun...@lists.osgeo.org] On Behalf Of Peter Willis
Sent: Friday, January 29, 2010 10:41 AM
To: mapserver-users@lists.osgeo.org
Subject: Re: [mapserver-users] How to serve individual RAW binary Bands fromWCS 
as floating point geotiff

Hello,

Interesting... The documentation indicates:

RangeSubset=contents:nearest[Band[120]]

I tried what you suggest but continue to have the same problem.
The mapserver *did not* complain about 'RESAMPLE=BILINEAR&Band=120'
so I guess we can assume that those parameters work.

Peter


Rahkonen Jukka wrote:

Hi,

I have never used this myself: RangeSubset=contents:nearest[Band[120]]
Is that WCS 1.0.0 parameter?  For selecting bands from WCS 1.0.0 service I have been using something like 
RESAMPLE=BILINEAR&Band=4


-Jukka Rahkonen-


-Original message-
From: mapserver-users-boun...@lists.osgeo.org on behalf of: Peter Willis
Sent: Fri 29.1.2010 0:26
To: mapserver-users@lists.osgeo.org
Subject: Re: [mapserver-users] How to serve individual RAW binary Bands fromWCS as 
floating point geotiff
 
Hello,


The definition of GEOTIFF_FLOAT is as follows:

OUTPUTFORMAT
   NAME GEOTIFF_FLOAT
   DRIVER GDAL/GTiff
   MIMETYPE application/octet-stream
   IMAGEMODE FLOAT32
END

I Added the MIMETYPE as shown to ensure that web browsers
would ask to save the file rather than just open it in an
image viewer from a temp file. (MS Windows...)

That part works fine.


Mostly I am concerned with extracting a specific 'band'
from the multi-band file. The server serves floating point
geotiff just fine.

Peter


Rahkonen Jukka wrote:

Hi,

How have you defined the OUTPUTFORMAT for GEOTIFF_FLOAT?

-Jukka Rahkonen-


Peter Willis wrote:

 

Hello,
I have been setting up a WCS map file for mapserver.
PROBLEM:
Problems arise when rasters are defined
as follows:

LAYER
   NAME SomeDataChannel120
   STATUS OFF
   DEBUG ON
   TYPE RASTER ### required
   PROCESSING BANDS=120
   METADATA
 wcs_label   Data/stuff
 wcs_rangeset_name   'bands'
 wcs_rangeset_label  Stuff thats mapped
 ows_extent '-135 55 -121 46'
 wcs_resolution '0.08332 -0.083332'
 ows_srs 'EPSG:4326'
 wcs_srs 'EPSG:4326'
 wcs_formats 'GEOTIFF_FLOAT,GEOTIFF_INT16'
 wcs_nativeformat 'ENVI'
#wcs_bandcount '563'
 wcs_rangeset_axes 'bands'
   END
   DATA /data/stuff.img
   PROJECTION
 init=epsg:4623
   END
   DUMP TRUE ### required
END


I have  'wcs_bandcount'  commented out because mapserver
causes an internal error in the web server if I define the
band count.

I get floating point geotiff served with the following client request:
http://sparky.com/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=SomeDataChannel120&CRS=EPSG:4326&BBOX=-135,55,-121,46&WIDTH=432&HEIGHT=216&FORMAT=GEOTIFF_FLOAT&RangeSubset=contents:nearest[Band[120]]

..however, I always only get the first band from the img file


QUESTIONS:

Is there a problem with this setup?

Do I need to define each band as a layer in the mapfile
or will mapserver WCS allow the client to request a single
channel?

Thanks for any enlightenment,

Peter

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users






___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users





___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] How to serve individual RAW binary Bands from WCS as floating point geotiff

2010-01-28 Thread Peter Willis

Hello,

The definition of GEOTIFF_FLOAT is as follows:

OUTPUTFORMAT
  NAME GEOTIFF_FLOAT
  DRIVER GDAL/GTiff
  MIMETYPE application/octet-stream
  IMAGEMODE FLOAT32
END

I added the MIMETYPE as shown to ensure that web browsers
would ask to save the file rather than just open it in an
image viewer from a temp file. (MS Windows...)

That part works fine.


Mostly I am concerned with extracting a specific 'band'
from the multi-band file. The server serves floating point
geotiff just fine.

Peter


Rahkonen Jukka wrote:

Hi,

How have you defined the OUTPUTFORMAT for GEOTIFF_FLOAT?

-Jukka Rahkonen-


Peter Willis wrote:

 

Hello,



I have been setting up a WCS map file for mapserver.



PROBLEM:



Problems arise when rasters are defined
as follows:


LAYER
   NAME SomeDataChannel120
   STATUS OFF
   DEBUG ON
   TYPE RASTER ### required
   PROCESSING BANDS=120
   METADATA
 wcs_label   Data/stuff
 wcs_rangeset_name   'bands'
 wcs_rangeset_label  Stuff thats mapped
 ows_extent '-135 55 -121 46'
 wcs_resolution '0.08332 -0.083332'
 ows_srs 'EPSG:4326'
 wcs_srs 'EPSG:4326'
 wcs_formats 'GEOTIFF_FLOAT,GEOTIFF_INT16'
 wcs_nativeformat 'ENVI'
#wcs_bandcount '563'
 wcs_rangeset_axes 'bands'
   END
   DATA /data/stuff.img
   PROJECTION
 init=epsg:4623
   END
   DUMP TRUE ### required
END


I have  'wcs_bandcount'  commented out because mapserver
causes an internal error in the web server if I define the
band count.

I get floating point geotiff served with the following client request:
http://sparky.com/cgi-bin/wcs?REQUEST=GetCoverage&SERVICE=WCS&VERSION=1.0.0&COVERAGE=SomeDataChannel120&CRS=EPSG:4326&BBOX=-135,55,-121,46&WIDTH=432&HEIGHT=216&FORMAT=GEOTIFF_FLOAT&RangeSubset=contents:nearest[Band[120]]

..however, I always only get the first band from the img file


QUESTIONS:

Is there a problem with this setup?

Do I need to define each band as a layer in the mapfile
or will mapserver WCS allow the client to request a single
channel?

Thanks for any enlightenment,

Peter

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: Science (Universe) gdal-bin missing EVHR / ITT ENVI binary format [RESOLVED]

2010-01-12 Thread Peter Willis
This is resolved:

The ENVI format is covered as one standard under 'RAW' formats.

'gdal-config --formats' just does not indicate the availability
of this format as there are several formats lumped together
under the banner of 'RAW'.
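
In other words (the format list below is abridged and from memory, so treat it
as approximate):

  $ gdal-config --formats
  gxf gtiff hfa aaigrid ceos ceos2 ... raw ... hdf4 ...
  $ gdal_translate -of ENVI input.tif test_envi.img   # works; the ENVI driver lives inside 'raw'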

I guess I should have just tried it before I blurted...

Thanks,

Peter

Peter Willis wrote:
 Hello,
 
 The gdal-bin package for ubuntu 9.04 (and possibly 9.10 ??)
 appears to be missing the RSI/ITT ENVI file format capability.
 
 The package is labeled as being maintained by people on this list.
 My apologies If I am in error regarding the maintainers.
 
 If this *is* the location to contact the maintainers:
 
 This format should be added and included by default in any future
 updates of the package since it is one of the most widely used
 binary scientific data formats.
 
 In the mean time, what are the gdal build parameters used
 when compiling gdal for the package build?
 
 I would like to compile the package for my own use, adding
 the ENVI format to my build.
 
 If there is an updated package that contains this additional format
 that would be even better.
 
 Thanks for any info,
 
 
 Peter
 
 
 
 


-- 
Peter Willis

Remote Sensing Analyst, Programmer, Electronics Technician

ASL Borstad Remote Sensing Inc.

1986 Mills Road
Sidney, British Columbia, Canada V8L5Y3
Tel: 250-656-0177 extension 128

-- 
Ubuntu-motu mailing list
Ubuntu-motu@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu


Re: [suggest] Dependencies listed in Perl module but not RPMforge package

2009-07-02 Thread Peter Willis

Christoph Maser wrote:

Am Mittwoch, den 01.07.2009, 21:34 +0200 schrieb Dag Wieers:

  

Good question. We do have SPEC files that still date back from before we
used an automated process. I just checked what our tool would do
(dar-diff-perl.sh) and it would in fact remove the BuildRequires
altogether. So it is safe to assume that our parser doesn't do this
correctly.

At LinuxTag Christoph told me we should rework our perl-management tools
in perl because all the functionality is available from modules. And I
agree :)




I am currently evaluating the already existing solutions because i think
it is easier and more useful if we use something shared and try to model
it to our needs. My favorite right now is CPANPLUS::Dist::RPM or at
least a small tool build on top of CPANPLUS. Also interesting could be
CPAN::Packager. I will test those and write up the results. Depending on
the results I will take further actions. Anyone interested in the topic
please contact me and share your opinions.
The current target is a) have a tool wich creates new specs from CPAN.
b) triger notifications from updates at CPAN. c) have a tool to
automatically update already existing spec files (i do have a very
simple shell script wich works for ~80% of the specs)

Chris


___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest
  


How about Ovid?
http://search.cpan.org/~gyepi/Ovid-0.12/ovid

___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


[mapserver-users] Does Mapserver WMS support 'Dimension' ?

2009-06-04 Thread Peter Willis

Hello,

Does mapserver support the WFS specification 'Dimension'
parameters?

If so what parameters do we need in the MAP file?

Thanks

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Does Mapserver WMS support 'Dimension' ?

2009-06-04 Thread Peter Willis

Kralidis,Tom [Ontario] wrote:
 


-Original Message-
From: mapserver-users-boun...@lists.osgeo.org 
[mailto:mapserver-users-boun...@lists.osgeo.org] On Behalf Of 
Peter Willis

Sent: Thursday, 04 June 2009 16:25
To: mapserver-users@lists.osgeo.org
Subject: [mapserver-users] Does Mapserver WMS support 'Dimension' ?

Hello,

Does mapserver support the WFS specification 'Dimension'
parameters?



You mean WMS, I imagine?


Sorry. Yes, WMS is the context.



The only Dimension support in WMS is for Time.


If so what parameters do we need in the MAP file?



See the WMS Time HowTo http://www.mapserver.org/ogc/wms_time.html for
more info on configuration.

..Tom







I see,

I guess I could fudge it and enter the MAP file as:

wms_timeitem WAVELENGTH

That's a bit goofy though


Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] How do I get WFS vector type (LINE, POINT or POLYGON) from mapserver WFS query?

2009-05-28 Thread Peter Willis


Hello,

I am using mapserver as a WFS server.

When I query getCapabilities I get the following capabilities:

- GetCapabilities
- DescribeFeatureType
- GetFeature

I also get a list of features.

If I then make 'describeFeatureType' or 'getFeature' requests for
one of the features on the list I get a non verbose description
of the feature, or the GML for the feature.

There doesn't appear to be anything that discloses if the feature
is a POINT, LINE, or POLYGON type feature.

The map file layers are defined properly as POINT, LINE, POLYGON
as the case may be.

Am I querying mapserver correctly?

What am I overlooking in the XML?

Thanks,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[RESOLVED:] [mapserver-users] How do I get WFS vector type (LINE, POINT or POLYGON) from mapserver WFS query?

2009-05-28 Thread Peter Willis

O.K., that was brief and non-productive...

The 'getFeature' query XML results have <ms:msGeometry> entries.

ie:

<ms:msGeometry>
  <gml:Polygon srsName="EPSG:4269">   <-- HERE!!
    <gml:outerBoundaryIs>
      <gml:LinearRing>
        <gml:coordinates>
        </gml:coordinates>
      </gml:LinearRing>
    </gml:outerBoundaryIs>
  </gml:Polygon>
</ms:msGeometry>


Sorry about that.


Peter





Peter Willis wrote:


Hello,

I am using mapserver as a WFS server.

When I query getCapabilities I get the following capabilities:

- GetCapabilities
- DescribeFeatureType
- GetFeature

I also get a list of features.

If I then make 'describeFeatureType' or 'getFeature' requests for
one of the features on the list I get a non verbose description
of the feature, or the GML for the feature.

There doesn't appear to be anything that discloses if the feature
is a POINT, LINE, or POLYGON type feature.

The map file layers are defined properly as POINT, LINE, POLYGON
as the case may be.

Am I querying mapserver correctly?

What am I overlooking in the XML?

Thanks,

Peter

___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] Mapserver/PostGIS map file problem (double quotes in layer 'DATA' element)

2009-05-27 Thread Peter Willis


Hello,

I am having a problem serving a PostGIS layer via mapserver
as WFS.

The problem arises from the generation/use of column names
in PostgreSQL that require double quotes.

ie:

 SELECT oid, gid, the_geom, "Area", "Perimeter", "PixelValue" FROM 
"global_Land_poly" WHERE "PixelValue"=1;




In the map file the 'DATA' member of the PostGIS layer is defined as:

DATA the_geom from (select oid,gid, the_geom, Area,Perimeter,PixelValue 
FROM global_Land_poly WHERE PixelValue=1 ) AS FOO using SRID=4326



Mapserver relays a PostGIS error from PostgreSQL:
'ERROR:  column "area" does not exist...'

This is because the column name is actually "Area"
and requires quotes.

How do I define double quotes in my PostGIS query
within the 'DATA' element of my mapfile layer?

Thanks for any enlightenment,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mapserver/PostGIS map file problem (double quotes in layer 'DATA' element)

2009-05-27 Thread Peter Willis


I tried that. I get the following error:

loadLayer(): Unknown identifier. Parsing error near (Area):(line 30)

Mapserver doesn't appear to like the additional formatting.
Do I need to recompile with system regex I wonder?

Peter


Adam Eskreis wrote:

You could try regex

\"Area\"

-Adam

On Wed, May 27, 2009 at 6:48 PM, Peter Willis pet...@borstad.com 
mailto:pet...@borstad.com wrote:



Hello,

I am having a problem serving a PostGIS layer via mapserver
as WFS.

The problem arises from the generation/use of column names
in PostgreSQL that require double quotes.

ie:

 SELECT oid,gid, the_geom, Area,Perimeter,PixelValue FROM
global_Land_poly WHERE PixelValue=1;



In the map file the 'DATA' member of the PostGIS layer is defined as:

DATA the_geom from (select oid,gid, the_geom,
Area,Perimeter,PixelValue FROM global_Land_poly WHERE PixelValue=1 )
AS FOO using SRID=4326


ERRMapserver relays a PostGIS error from PostgreSQL:
'ERROR:  column area does not exist.../ERR

This is because the column name is actually Area
and requires quotes.

How do I define double quotes in my PostGIS query
within the 'DATA' element of my mapfile layer?

Thanks for any enlightenment,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org mailto:mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users





___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] [RESOLVED:] Mapserver/PostGIS map file problem (double quotes in layer 'DATA' element)

2009-05-27 Thread Peter Willis

After a bit of experimentation I have discovered that
using single quotes, to enclose the element value, allows
double quotes to be used in the query.

ie:

DATA 'the_geom from (select oid, gid, the_geom, 
"Area", "Perimeter", "PixelValue" FROM "global_Land_poly" WHERE 
"PixelValue"=1 ) AS FOO using SRID=4326'



Thanks to all,

Peter


Peter Willis wrote:


I tried that. I get the following error:

loadLayer(): Unknown identifier. Parsing error near (Area):(line 30)

Mapserver doesn't appear to like the additional formatting.
Do I need to recompile with system regex I wonder?

Peter


Adam Eskreis wrote:

You could try regex

\Area\

-Adam

On Wed, May 27, 2009 at 6:48 PM, Peter Willis pet...@borstad.com 
mailto:pet...@borstad.com wrote:



Hello,

I am having a problem serving a PostGIS layer via mapserver
as WFS.

The problem arises from the generation/use of column names
in PostgreSQL that require double quotes.

ie:

 SELECT oid,gid, the_geom, Area,Perimeter,PixelValue FROM
global_Land_poly WHERE PixelValue=1;



In the map file the 'DATA' member of the PostGIS layer is defined as:

DATA the_geom from (select oid,gid, the_geom,
Area,Perimeter,PixelValue FROM global_Land_poly WHERE PixelValue=1 )
AS FOO using SRID=4326


ERRMapserver relays a PostGIS error from PostgreSQL:
'ERROR:  column area does not exist.../ERR

This is because the column name is actually Area
and requires quotes.

How do I define double quotes in my PostGIS query
within the 'DATA' element of my mapfile layer?

Thanks for any enlightenment,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org 
mailto:mapserver-users@lists.osgeo.org

http://lists.osgeo.org/mailman/listinfo/mapserver-users





___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users






--
Peter Willis

Remote Sensing Analyst, Programmer, Electronics Technician

ASL Borstad Remote Sensing Inc.

1986 Mills Road
Sidney, British Columbia, Canada V8L5Y3
Tel: 250-656-0177 extension 135
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mapserver/PostGIS map file problem (double quotes in layer 'DATA' element)

2009-05-27 Thread Peter Willis

Hello,

Darn! I just sent a [RESOLVED] with the same solution
(just as I was receiving your email...).
Yes, this is how I managed to make it work.

Everything works now.
I guess you get the badge for that one.

Thanks for your help.

Peter



Simon Haddon wrote:

Have you tried

DATA 'the_geom from (select oid, gid, the_geom,
 "Area", "Perimeter", "PixelValue" FROM "global_Land_poly" WHERE
"PixelValue"=1 )
 AS FOO using SRID=4326'

The other option is to change the table name and column names to be lower
case or case insensitive. Mixed-case table and column names are always a
pain in any database system.

If you can't modify the table then try creating a view and using the view
in your query instead. Make sure your view is created all lower case
without quotes.  This will probably mean you will need to alias your select
to lower case the column names and you will probably have to add it to the
postgis geometry tables manually but other than that it will work.

Cheers,
Simon


___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mapfile HDF 'DATA' element format

2009-04-29 Thread Peter Willis

Frank Warmerdam wrote:


One SDS with more than one band (ie. rank 3 with the third rank
more than a dimension of 1) is accessed normally as long as you want
to use the first three bands as RGB.  If you want to control which bands
you use, add the BANDS PROCESSING option.

eg.

PROCESSING BANDS=4,2,1

The tricky case is more than one SDS which is normally when you
start seeing subdatasets.  Normally you would use gdalinfo to
identify what subdatasets are available.




Hello Frank,

Thanks for clarifying. Maybe I should be filling out
the wiki somewhere regarding this information.

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] What is the WCS 'mime type' for *RAW* HDF data?

2009-04-29 Thread Peter Willis

Hello,

When I query WCS for information regarding an HDF file

I am assuming that *I* need to define a mime type, for
the data provided by the HDF file, since no mime type is
apparent in the GetCapabilities request.

What 'mime' type do I use in my 'GetCoverage' request
if I just want to serve the raw binary record
(ie: Scientific/Remote Sensing Data in D.N.)
instead of an image format?

Shouldn't I be able to just get the data?
Or do I need to specify SOAP output, somehow,
in my MAP file?

Thanks for any pointers,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


[mapserver-users] Mapfile HDF 'DATA' element format

2009-04-28 Thread Peter Willis

Hello,

Where can I find the specification for the
'DATA' entry in mapserver MAP files where
HDF files are being used?

I am mostly interested in using these in
the context of WCS.

Here is my current non-working example of an HDF
file test in my current WCS map file:

LAYER
  NAME chlorophyll
  METADATA
wcs_label   L3MG8D9KM/Chlorophyll
wcs_rangeset_name   Range 1
wcs_rangeset_label  Chlorophyll_DN
  END
  TYPE RASTER
  STATUS ON
  DATA 'HDF4:/public/A20081932008200.L3m_8D_CHLO_9.hdf://l3m_data'
  PROCESSING   BANDS=1
  PROJECTION
init=epsg:4326
  END
  DUMP TRUE
END


I am not sure I have the format of the DATA entity correct.

Thanks for any pointers,

Peter
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [mapserver-users] Mapfile HDF 'DATA' element format

2009-04-28 Thread Peter Willis

Frank Warmerdam wrote:

Peter Willis wrote:

Hello,

Where can I find the specification for the
'DATA' entry in mapserver MAP files where
HDF files are being used?

I am mostly interested in using these in
the context of WCS.

Here is my current non-working example of an HDF
file test in my current WCS map file:

...

  DATA 'HDF4:/public/A20081932008200.L3m_8D_CHLO_9.hdf://l3m_data'


Peter,

You need to use the gdalinfo command on the .hdf base file to get
a list of subdatasets within the file.  If the gdalinfo reports something
like:


SUBDATASET_16_NAME=HDF4_EOS:EOS_SWATH:MOD07_L2.A2000110.0220.002.2000196104217.hdf:mod07:Retrieved_Moisture_Profile 

  SUBDATASET_16_DESC=[20x406x270] Retrieved_Moisture_Profile mod07 
(16-bit integer)
SUBDATASET_17_NAME=HDF4_EOS:EOS_SWATH:MOD07_L2.A2000110.0220.002.2000196104217.hdf:mod07:Retrieved_Height_Profile 

  SUBDATASET_17_DESC=[20x406x270] Retrieved_Height_Profile mod07 (16-bit 
integer)


Then you might put the following in your .map file:

  DATA 
'HDF4_EOS:EOS_SWATH:MOD07_L2.A2000110.0220.002.2000196104217.hdf:mod07:Retrieved_Moisture_Profile' 




The key is to use _NAME portion of the subdatasets reported by gdalinfo.

Best regards,


Hello Frank,

I don't get any of those entries when I run gdalinfo against the file.
There is one SDS in the root of the HDF file called 'l3m_data'.

Here is an example of what I get with gdalinfo:

 some_yo...@linux-svn:/public# gdalinfo A20081932008200.L3m_8D_CHLO_9.hdf

 Driver: HDF4Image/HDF4 Dataset
 Size is 4320, 2160
 Coordinate System is `'
 Metadata:
   Product Name=A20081932008200.L3m_8D_CHLO_9
   Sensor Name=MODISA
   Sensor=
   Title=MODISA Level-3 Standard Mapped Image
   Data Center=
   Station Name=
   Station Latitude=0
   Station Longitude=0
   Mission=
   Mission Characteristics=
   Sensor Characteristics=
   Product Type=8-day
   Replacement Flag=ORIGINAL
   Software Name=smigen
   Software Version=3.60
   Processing Time=200821020255
   Input Files=A20081932008200.L3b_8D.main
   Processing Control=smigen par=A20081932008200.L3m_8D_CHLO_9.param
   Input Parameters=IFILE = 
/data1/sdpsoper/vdc/vpu4/workbuf/A20081932008200.L3b_8D.main|OFILE = 
A20081932008200.L3m_8D_CHLO_9|PFILE = |PROD = chlor_a|PALFILE = 
DEFAULT|RFLAG = ORIGINAL|MEAS = 1|STYPE = 0|DATAMIN = 0.00|DATAMAX = 
0.00|LONWEST = -180.00|LONEAST = 180.00|LATSOUTH = 
-90.00|LATNORTH = 90.00|RESOLUTION = 9km|PROJECTION = 
RECT|GAP_FILL = 0|SEAM_LON = -180.00|PRECISION=I
   L2 Flag 
Names=ATMFAIL,LAND,HILT,HISATZEN,STRAYLIGHT,CLDICE,COCCOLITH,LOWLW,CHLFAIL,PRODFAIL,CHLWARN,NAVWARN,MAXAERITER,ATMWARN,HISOLZEN,NAVFAIL,FILTER,HIGLINT

   Period Start Year=2008
   Period Start Day=193
   Period End Year=2008
   Period End Day=200
   Start Time=200819307727
   End Time=2008201023506024
   Start Year=2008
   Start Day=193
   Start Millisec=7727
   End Year=2008
   End Day=201
   End Millisec=9306024
   Start Orbit=0
   End Orbit=0
   Orbit=0
   Map Projection=Equidistant Cylindrical
   Latitude Units=degrees North
   Longitude Units=degrees East
   Northernmost Latitude=90
   Southernmost Latitude=-90
   Westernmost Longitude=-180
   Easternmost Longitude=180
   Latitude Step=0.0834
   Longitude Step=0.0834
   SW Point Latitude=-89.95834
   SW Point Longitude=-179.9583
   Data Bins=7128918
   Number of Lines=2160
   Number of Columns=4320
   Parameter=Chlorophyll a concentration
   Measure=Mean
   Units=mg m^-3
   Scaling=logarithmic
   Scaling Equation=Base**((Slope*l3m_data) + Intercept) = Parameter value
   Base=10
   Slope=5.813776e-05
   Intercept=-2
   Scaled Data Minimum=0.01
   Scaled Data Maximum=64.5654
   Data Minimum=0.005747
   Data Maximum=99.93153
   Scaling=logarithmic
   Scaling Equation=Base**((Slope*l3m_data) + Intercept) = Parameter value
   Base=10
   Slope=5.813776e-05
   Intercept=-2
 Corner Coordinates:
 Upper Left  (0.0,0.0)
 Lower Left  (0.0, 2160.0)
 Upper Right ( 4320.0,0.0)
 Lower Right ( 4320.0, 2160.0)
 Center  ( 2160.0, 1080.0)
 Band 1 Block=4320x1 Type=UInt16, ColorInterp=Gray

 some_yo...@linux-svn:/public#
___
mapserver-users mailing list
mapserver-users@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-users


Re: [Qgis-user] Need Help Testing of WMS Authentication

2009-04-20 Thread Peter Willis

gsherman wrote:

I have implemented basic HTTP authentication for the QGIS WMS provider
and am looking for some volunteers to test it. The changes have not been
committed to the QGIS SVN repository. Testing will require
compiling from source.

When creating or editing WMS connection in QGIS you can now specify an
optional username and password to be used with a protected WMS. These
are stored in your QGIS settings. If you don't want the password stored
you can leave it blank and be prompted for it at connection time.

I haven't had time to prepare a set of patches but you can download the
source tree (20Mb) that includes the WMS authentication from:

  http://gisalaska.com/qgis_wms_auth.zip

I am particularly interested in testing by folks that require a proxy.

Thanks,

-gary


What compiler/version are you using for the build?
I have cygwin (gcc), MS VC 5, MS Visual Studio .NET 2003
but not MinGW.

Peter
___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


Re: [Qgis-user] WFS/WMS layers behind HTTP/HTTPS basic auth

2009-04-17 Thread Peter Willis




Is there any way to use HTTP basic auth with QGIS WMS?


Currently there is no support for this but its something a few people
have requested now. I recommend adding an enhancement ticket to trac and
we will hopefully add support for this in the near future.

Thanks

Tim



Is there already a ticket for this in trac?
Does the priority increase as more people ask
for a specific feature?

As a note; uDig handles WMS behind basic auth.
You may want to peruse their source code before
reinventing. Of course, they are programming in java
so it's difficult to tell how much of the basic auth
is being handled directly by java as opposed to
the actual program handling it directly.

Peter
___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


Re: WG: [Qgis-user] WMS Layers TITLE entity

2009-04-14 Thread Peter Willis

Hello,

Thanks for the information.

Which version of QGIS is the OSGEO 1.0.1-Kore ?
With that install/version I am still just getting the server name
as the default layer name.

I'll ask at OSGEO to see who is doing the package builds...

Peter


Hugentobler Marco wrote:



-Original message-
From: Hugentobler Marco
Sent: Tue 14.04.2009 22:25
To: pet...@borstad.com
Subject: RE: [Qgis-user] WMS Layers TITLE entity
 


Hi Peter

QGIS now selects the WMS layer name(s) as a first legend name (at least for 
version >= 1.1). In older versions, it was the server name.

Regards,
Marco


___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


Re: [Qgis-user] Relational Databases and PostGIS formatting of Vector Data

2009-04-09 Thread Peter Willis

Alex Mandel wrote:

I'm a little lost here, in my experience a vector layer becomes a table,
not multiple tables and all the geometries are stored in a blob column
no matter what type it is.


That's what I'm curious about. Each vector is becoming a table
unto itself. That's not proper normal form for a relational database.



Of course if you have multiple vector types in the same table this can
cause issues with various spatial operations, so you either need to
separate them out or subquery when you want to perform spatial
operations to make sure you only use compatible types.


Yes, this is exactly what you want when using a relational database
in the proper manner.


What tool did you use to import the layer into POSTGIS?


Quantum GIS.

___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


[Qgis-user] WFS/WMS layers behind HTTP/HTTPS basic auth

2009-04-09 Thread Peter Willis

Hello,

I am wondering if there is a way to get QGIS to request
the basic auth user name and password for WMS/WFS URLs
that are protected by HTTP basic auth.

When I put a URL (that uses basic auth) into the
WMS server connections I just get an error that
the return message is misunderstood when QGIS attempts
to connect.

Is there any way to use HTTP basic auth with QGIS WMS?

Thanks

Peter
___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


[Qgis-user] Problem Connecting to PostgreSQL database for PostGIS on localhost in MS Windows

2009-04-08 Thread Peter Willis

Hello,

I have a PostGIS enabled database on my MS windows
workstation. I can connect to the database fine
with other tools such as pgadmin.

When I try to use the 'Shapefile to PostGIS Import Tool'
I can't connect to PostgreSQL on localhost.

Is there anything additional that I need to do to make
the connection on windows via localhost?

Thanks

Peter
___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


Re: [Qgis-user] Problem Connecting to PostgreSQL database for PostGIS on localhost in MS Windows [RESOLVED]

2009-04-08 Thread Peter Willis

Hello,

I just resolved this by installing version 1.0
using the OSGEO installer.

Everything appears to work fine now.

Thanks

Peter

Peter Willis wrote:

Hello,

I have a PostGIS enabled database on my MS windows
workstation. I can connect to the database fine
with other tools such as pgadmin.

When I try to use the 'Shapefile to PostGIS Import Tool'
I can't connect to PostgreSQL on localhost.

Is there anything additional that I need to do to make
the connection on windows via localhost?

Thanks

Peter

___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


[Qgis-user] Relational Databases and PostGIS formatting of Vector Data

2009-04-08 Thread Peter Willis

Hello,

I just ingested a MULTIPOLYGON vector into a PostGIS enabled
database and realized that each vector becomes a unique TABLE
in the database.

Is this really necessary?

Why not use proper relational database techniques and have
all vectors of a specific type go into a single table
with a unique ID for rows that belong to a specific
vector?

Shouldn't I have tables named:

LINES
MULTIPOLYGONS
POLYGONS
POINTS

that link to a table named VECTORS by a unique ID?

OR!! maybe a single table called VECTOR_GEOGRAPHY that
has a geography column for each of LINE,MULTIPOLYGON,POLYGON, and POINT
plus a VECTOR_TYPE column to indicate which column the geography resides 
in. Having NULL as a default for these columns would make an easy check
for availability of the type for the current vector row.
This would also allow for trigger functions to automatically fill out
the geography of the other vector types, in the current row, by
extracting them from a higher order entry (ie: extract lines, points and 
centroids from polygons).


Attributes should also be associated to the vector geography
indirectly and placed in a series of tables something like:

VECTOR_ATTRIBUTES---NAME
|---VALUE_TYPE
|---VECTOR_ID
|---VECTOR_ITEM_ID
|---THIS_ATTRIBUTE_ID

VECTOR_ATTRIBUTE_VALUES---THIS_VALUE_ID
  |---ATTRIBUTE_ID
  |---VALUE_BASE_64_ENCODED
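
Roughly, in DDL terms, I am picturing something like this (an untested
sketch; all table, column, and type choices are illustrative only, using
the built-in geometric types just to show the shape of the idea):

CREATE TABLE vector_geography (
    vector_item_id serial PRIMARY KEY,
    vector_id      integer NOT NULL,  -- which logical vector this row belongs to
    vector_type    text    NOT NULL,  -- 'LINE', 'POLYGON', 'POINT', ...
    line_geom      path    DEFAULT NULL,
    polygon_geom   polygon DEFAULT NULL,
    point_geom     point   DEFAULT NULL
);

CREATE TABLE vector_attributes (
    this_attribute_id serial PRIMARY KEY,
    vector_id         integer NOT NULL,
    vector_item_id    integer NOT NULL REFERENCES vector_geography (vector_item_id),
    name              text    NOT NULL,
    value_type        text    NOT NULL
);

CREATE TABLE vector_attribute_values (
    this_value_id         serial PRIMARY KEY,
    attribute_id          integer NOT NULL REFERENCES vector_attributes (this_attribute_id),
    value_base_64_encoded text
);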

I realize that this causes some overhead but it would make
querying available vector coverages and attributes
a bit easier than having to change tables for each individual
vector.

There is also the added benefit of being able to query for
vector entities and sub-entities that fall within a specific
viewing area. Thus you wouldn't need to read-out/redraw the
complete vector if you're not looking at a broad enough
scale to see it, improving rendering time for large composite
vectors.

If we're going to use a database, we should make use of the
facilities provided by a database and stop thinking in terms of
flat files from the 1970s.

My opinions,

Peter
___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user


Re: [SQL] FUNCTION problem

2009-04-03 Thread Peter Willis

Adrian Klaver wrote:

On Friday 03 April 2009 6:51:05 am Adrian Klaver wrote:

On Thursday 02 April 2009 6:16:44 pm Adrian Klaver wrote:

Now I remember. Its something that trips me up, the RECORD in RETURN
setof RECORD is not the same thing as the RECORD in DECLARE RECORD. See
below for a better explanation-
http://www.postgresql.org/docs/8.3/interactive/plpgsql-declarations.html#
PL PGSQL-DECLARATION-RECORDS Note that RECORD is not a true data type,
only a placeholder. One should also realize that when a PL/pgSQL function
is declared to return type record, this is not quite the same concept as
a record variable, even though such a function might use a record
variable to hold its result. In both cases the actual row structure is
unknown when the function is written, but for a function returning record
the actual structure is determined when the calling query is parsed,
whereas a record variable can change its row structure on-the-fly.



--
Adrian Klaver
akla...@comcast.net

For this particular case the following works.

CREATE OR REPLACE FUNCTION test_function(integer) RETURNS record
AS $Body$
DECLARE croid integer;
DECLARE R RECORD;
BEGIN
SELECT INTO croid 2;
SELECT INTO R  croid,$1;
RETURN R;
END;

$Body$
LANGUAGE plpgsql;

--
Adrian Klaver
akla...@comcast.net


Forgot to show how to call it.

test=# SELECT * from test_function(1) as test(c1 int,c2 int);
 c1 | c2
----+----
  2 |  1
(1 row)




Ah! I see what you mean about the definition of 'RECORD'.
(The lights come on...)

And here I thought it would all be so simple.

You show a valid and most informative solution.
This should get things working for me.

Thank you very much for your help.

Peter

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] FUNCTION problem

2009-04-03 Thread Peter Willis

Adrian Klaver wrote:



If you are using Postgres 8.1+ then it becomes even easier because you can use OUT parameters 
in the function argument list to eliminate the "as test(c1 int,c2 int)" clause. At 
this point it becomes an A--B--C problem, i.e. determine what your inputs are, how you 
want to process them and how you want to return the output.



'8.1+'?? Hmmm, I'm using 8.3. I could use that.

I got the more complex version of the query to work
by backing away from 'plpgsql' as the language and using
'sql' instead.

I then nested (terribly ugly) my select statements to
generate a single SQL query from all. This allows
me to change the output of the query without needing
to define a new set of output 'OUT' parameters each time
I change things.

I do have a use for the 'OUT' parameters with another set
of functions, though. Thanks for that.
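
For reference, the OUT-parameter shape I have in mind looks roughly like
this (an untested sketch, reworking the test function from earlier in the
thread):

CREATE OR REPLACE FUNCTION test_function(IN  in_val integer,
                                         OUT croid  integer,
                                         OUT passed integer)
AS $Body$
BEGIN
    croid  := 2;        -- stands in for the small calculation
    passed := in_val;
END;
$Body$
LANGUAGE plpgsql;

-- no column definition list needed in the call:
SELECT * FROM test_function(1);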

Peter

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] FUNCTION problem

2009-04-02 Thread Peter Willis

Adrian Klaver wrote:



Did you happen to catch this:
"Note that functions using RETURN NEXT or RETURN QUERY must be called as a table 
source in a FROM clause"

Try:
select * from test_function(1)



I did miss that, but using that method to query the function
didn't work either. Postgres doesn't see the result as a
tabular set of records.

Even if I replace the FOR loop with:

FOR R IN SELECT * FROM pg_database LOOP
    RETURN NEXT R;
END LOOP;

I get the same error(s). I don't think postgres likes
the unrelated 'SELECT INTO variable [column] FROM [QUERY] LIMIT 1'
lines before the FOR loop...

I think I need to go back and approach the function from a
different direction.
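
(For what it's worth, the direction I have in mind is roughly this untested
sketch: put each output row into a record variable, hand it back with
RETURN NEXT, and call the function with a column definition list.)

CREATE OR REPLACE FUNCTION test_function(integer)
RETURNS SETOF RECORD AS
$BODY$
DECLARE
    croid integer;
    R RECORD;
BEGIN
    SELECT INTO croid 2;
    SELECT INTO R croid, $1;
    RETURN NEXT R;
    RETURN;
END;
$BODY$
LANGUAGE plpgsql;

-- called as a table source, with a column definition list:
SELECT * FROM test_function(1) AS t(c1 integer, c2 integer);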

Thanks for all the pointers.

Peter

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] FUNCTION problem

2009-04-01 Thread Peter Willis

Hello,

I am having a problem with a FUNCTION.
The function creates just fine with no errors.

However, when I call the function postgres produces an error.

Perhaps someone can enlighten me.


--I can reproduce the error by making a test function
--that is much easier to follow than the original:

CREATE OR REPLACE FUNCTION test_function(integer)
  RETURNS SETOF RECORD AS
$BODY$
  DECLARE croid integer;
  BEGIN

--PERFORM A SMALL CALCULATION
--DOESNT SEEM TO MATTER WHAT IT IS

SELECT INTO croid 2;

--A SELECT STATEMENT OUTPUTS RECORDS (one in this case)
SELECT croid,$1;
  END;

$BODY$
  LANGUAGE 'plpgsql' VOLATILE




--The call looks like the following:

SELECT test_function(1);





--The resulting error reads as follows:

ERROR:  query has no destination for result data
HINT:  If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT:  PL/pgSQL function test_function line 5 at SQL statement

** Error **

ERROR: query has no destination for result data
SQL state: 42601
Hint: If you want to discard the results of a SELECT, use PERFORM instead.
Context: PL/pgSQL function test_function line 5 at SQL statement

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [GENERAL] Proper entry of polygon type data

2009-03-25 Thread Peter Willis

Hi Brent,

I am aware of PostGIS and already use it. My question was regarding
the entry format of PostgreSQL polygon data. There is a void
in the PostgreSQL documentation regarding this.

Incidentally, PostGIS uses PostgreSQL polygon, point, and path
data types.

Using PostGIS for simple, non-geographic polygon rules is a
bit like using a tank to kill a mosquito.

Peter

Brent Wood wrote:

Hi Peter,

If you want to use Postgres to store/manage/query spatial data, I strongly 
recommend you look at PostGIS,  not the native Postgres geometry types.


Brent Wood

Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand

Peter Willis pet...@borstad.com 03/24/09 10:35 AM 

Hello,

I would like to use 'polygon' type data and am wondering about
the entry format of the vertex coordinates.

Are the coordinates of the polygon type to be entered one
entry per polygon vertex, or one entry per polygon edge segment?

For example:
I have a triangle with vertex corners A, B, C.

One entry per vertex format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy)) );


One entry per edge format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Bx,By),(Cx,Cy),(Cx,Cy),(Ax,Ay)) );

Which entry format is the correct one?

If per vertex format is the correct one, do I need to
'close' the path by entering the first vertex again at the end of the
list?

ie:
INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy),(Ax,Ay)) );

Thanks,

Peter




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Proper entry of polygon type data

2009-03-25 Thread Peter Willis

Mark Cave-Ayland wrote:


Peter Willis wrote:

Incidentally, PostGIS uses PostgreSQL polygon, point, and path
data types.


Errr... no it doesn't. PostGIS uses its own internal types to represent 
all the different geometries, although it does provide a cast between 
the existing PostgreSQL types as an aid for people wishing to migrate.


I stand corrected, I guess.
The last time I looked at the actual guts of PostGIS was
WAAAY back. And that was long before I actually started
using it.

...of course, my WAAAY back memory may be crossed with my WAAAY
back forgetfulness, there as well.

Peter

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[SQL] Proper entry of polygon type data

2009-03-24 Thread Peter Willis

Hello,

I would like to use 'polygon' type data and am wondering about
the entry format of the vertex coordinates.

Are the coordinates of the polygon type to be entered one
entry per polygon vertex, or one entry per polygon edge segment?

For example:
I have a triangle with vertex corners A, B, C.

One entry per vertex format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy)) );


One entry per edge format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Bx,By),(Cx,Cy),(Cx,Cy),(Ax,Ay)) );

Which entry format is the correct one?

If per vertex format is the correct one, do I need to
'close' the path by entering the first vertex again at the end of the
list?

ie:
INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy),(Ax,Ay)) );

Thanks,

Peter


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[GENERAL] Proper entry of polygon type data

2009-03-23 Thread Peter Willis

Hello,

I would like to use 'polygon' type data and am wondering about
the entry format of the vertex coordinates.

Are the coordinates of the polygon type to be entered one
entry per polygon vertex, or one entry per polygon edge segment?

For example:
I have a triangle with vertex corners A, B, C.

One entry per vertex format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy)) );


One entry per edge format suggests

INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Bx,By),(Cx,Cy),(Cx,Cy),(Ax,Ay)) );

Which entry format is the correct one?

If per vertex format is the correct one, do I need to
'close' the path by entering the first vertex again at the end of the
list?

ie:
INSERT INTO my_table (my_polygon_column)
VALUES ( ((Ax,Ay),(Bx,By),(Cx,Cy),(Ax,Ay)) );

Thanks,

Peter

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [suggest] package for Net::MAC perl module

2008-09-18 Thread Peter Willis
While we're talking about the retarded Red Hat perl rpm, is anyone aware 
that perl 5.8 as shipped with RHEL has an unpatched bug which can cause 
an exponential slowdown when using overloading?

(This might have been addressed here already, and if so please ignore me :-X)

This blog post contains the details: 
http://blog.vipul.net/2008/08/24/redhat-perl-what-a-tragedy/


I incorporated the patches from a core perl dev 
(http://use.perl.org/~nicholas/journal/37274), which through some simple 
testing seem to fix the slowdowns while allowing one to retain all the 
Red Hat perl patches.  This is not the Fedora patch, which still has the 
potential for slowdowns.


As far as @INC is concerned, we patch our RHEL4 perl to use site_perl 
before vendor_perl (RHEL5 already does it this way). I've also started 
building my perl modules with /usr/local prefix for mandir, datadir, etc 
to avoid any conflicting files in the base perl package.


Dag Wieers wrote:

On Wed, 17 Sep 2008, Dave Cross wrote:


Dag Wieers wrote:


That is why I think we need to somehow make Red Hat understand that
they should not add CPAN distributions to the perl package.


It's not Red Hat that does this. The standard Perl distribution contains
modules that are also distributed independently from CPAN. They are
known as dual-life modules.


Right. But the fact that they are part of the perl RPM and not 
packaged as separate RPMs makes it hard to replace them. Nothing 
forces Red Hat to ship them with the perl RPM.


In the past I thought it was because Red Hat did not want those 
modules to be replaced (given they have to support the system) and the 
default module path actually confirmed that.


But since RHEL5 they allow replacing modules from site_perl because 
they changed the order, so there goes the bit of reverse psychology :) 
So I would have expected them to package them separately so that at 
least it is more obvious from the package database that a certain RPM 
was in fact replaced.






perl58-slow-overloading.patch.gz
Description: application/gzip
___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


Re: [suggest] package for Net::MAC perl module

2008-09-18 Thread Peter Willis
Well how about that! Released yesterday... Well, never mind my patch then. 
(We could have used that 3 weeks ago when our biggest perl product 
launched, but it's good that they're finally getting to the fix)


Niels de Vos wrote:

Hi Ralph,

you probably meant: http://rhn.redhat.com/errata/RHBA-2008-0876.html

Cu,
Niels

On Thu, Sep 18, 2008 at 5:42 PM, Ralph Angenendt
[EMAIL PROTECTED]  wrote:
   

Peter Willis wrote:
 

While we're talking about the retarded Red Hat perl rpm, is anyone aware
that perl 5.8 as shipped with RHEL has an unpatched bug which can cause
an exponential slowdown when using overloading?
(This might have been addressed here already, and if so please ignore me :-X)
   

Especially as a patch has been released:

I think it's http://rhn.redhat.com/errata/RHBA-2008-0883.html, but
that is down at the moment.

Cheers,

Ralph

___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


 

___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest
   


___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


[suggest] perl modules with conflicting man pages

2008-06-30 Thread Peter Willis
Hi, there are a couple perl modules in rpmforge which upgrade older 
versions in perl core, but the perl modules have man pages which exist 
in perl core and prevent them from being installed. These seem to be 
requirements for other perl modules so it is preventing yum from 
installing other modules. Here are some example errors from yum:


  file /usr/share/man/man3/IO::Socket::UNIX.3pm.gz from install of 
perl-IO-1.2301-1.el5.rf conflicts with file from package 
perl-5.8.8-10.el5_0.2


  file /usr/bin/enc2xs from install of perl-Encode-2.25-1.el5.rf 
conflicts with file from package perl-5.8.8-10.el5_0.2
  file /usr/share/man/man3/Encode.3pm.gz from install of 
perl-Encode-2.25-1.el5.rf conflicts with file from package 
perl-5.8.8-10.el5_0.2


  file /usr/share/man/man3/Getopt::Long.3pm.gz from install of 
perl-Getopt-Long-2.37-1.el5.rf conflicts with file from package 
perl-5.8.8-10.el5_0.2


  file /usr/share/man/man3/Test::Harness.3pm.gz from install of 
perl-Test-Harness-3.11-1.el5.rf conflicts with file from package 
perl-5.8.8-10.el5_0.2


___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


[suggest] request for multiple perl modules

2008-06-18 Thread Peter Willis
I just started using rpmforge and it's minimized the packaging I need to 
do by a great amount. However there are still some perl modules which I 
need that rpmforge does not provide, so I'd like to ask if they could be 
added to the list of maintained packages. The version numbers might not 
be the absolute latest, they are from the cpan modules list I downloaded 
2 days ago but these versions should be plenty new enough for me. An 
example URL to prepend to the tarball paths: 
http://cpan.mirror.facebook.com/modules/by-authors/id/


Apache::AuthenNIS  0.13  
S/SP/SPEEVES/Apache-AuthenNIS-0.13.tar.gz
Apache::AuthzNIS   0.11  
S/SP/SPEEVES/Apache-AuthzNIS-0.11.tar.gz
Apache::SubProcess 0.03  
D/DO/DOUGM/Apache-SubProcess-0.03.tar.gz
Business::PayPal   0.02  
M/MO/MOCK/Business-PayPal-0.02.tar.gz

CLASS  1.00  M/MS/MSCHWERN/CLASS-1.00.tar.gz
CPANPLUS   0.84  K/KA/KANE/CPANPLUS-0.84.tar.gz
Config::INI::Simple0.02  
K/KI/KIRSLE/Config-INI-Simple-0.02.tar.gz

DBD::Oracle1.21  P/PY/PYTHIAN/DBD-Oracle-1.21.tar.gz
GIFgraph   1.20  M/MV/MVERB/GIFgraph-1.20.tar.gz
IniConf1.03  R/RB/RBOW/IniConf-1.03.tar.gz
Net::NIS   0.43  E/ES/ESM/Net-NIS-0.43.tar.gz
Net::SFTP  0.10  D/DB/DBROBINS/Net-SFTP-0.10.tar.gz
Net::SFTP::Foreign 1.38  
S/SA/SALVA/Net-SFTP-Foreign-1.38.tar.gz
Object::MultiType  0.05  
G/GM/GMPASSOS/Object-MultiType-0.05.tar.gz
Perl::Tidy 20071205  
S/SH/SHANCOCK/Perl-Tidy-20071205.tar.gz

Plagger0.007017  M/MI/MIYAGAWA/Plagger-0.7.17.tar.gz
Proc::PidUtil  0.08  M/MI/MIKER/Proc-PidUtil-0.08.tar.gz
RTSP::Lite  0.1  N/NA/NABESHIMA/RTSP-Lite-0.1.tar.gz
Template::Provider::Encoding   0.10  
M/MI/MIYAGAWA/Template-Provider-Encoding-0.10.tar.gz
Term::Encoding 0.02  
M/MI/MIYAGAWA/Term-Encoding-0.02.tar.gz

Term::Size  0.2  T/TI/TIMPX/Term-Size-0.2.tar.gz
Term::Visual   0.08  
L/LU/LUNARTEAR/Term-Visual-0.08.tar.gz

Text::Tags 0.04  G/GL/GLASSER/Text-Tags-0.04.tar.gz
WWW::Curl  4.00  S/SZ/SZBALINT/WWW-Curl-4.00.tar.gz
XML::Feed  0.12  B/BT/BTROTT/XML-Feed-0.12.tar.gz
XML::RSS::LibXML 0.3002  
D/DM/DMAKI/XML-RSS-LibXML-0.3002.tar.gz
XML::Smart 1.006009  
G/GM/GMPASSOS/XML-Smart-1.6.9.tar.gz




Thanks!
___
suggest mailing list
suggest@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest


Re: [GENERAL] Question about accessing current row data inside trigger

2005-03-20 Thread peter Willis
Hello,
This issue is resolved.
I was using the wrong struct.
Peter
Tom Lane wrote:
peter Willis [EMAIL PROTECTED] writes:
 

I have a trigger function written in C.
...
   Since the trigger is called after each row update the actual row data
should be available in some way to the trigger.
   

Sure: tg_trigtuple or tg_newtuple depending on which state you want.
See
http://www.postgresql.org/docs/8.0/static/trigger-interface.html
regards, tom lane
 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Question about accessing current row data inside trigger

2005-03-20 Thread peter Willis
Hello,
I resolved this issue already.
The trigger now works fine.
I was looking at the wrong structure.
Thanks,
Peter
Michael Fuhr wrote:
On Tue, Mar 08, 2005 at 11:37:14AM -0800, peter Willis wrote:
 

I have a trigger function written in C.
The trigger function is called via:
CREATE TRIGGER after_update AFTER UPDATE ON some_table
  FOR EACH ROW EXECUTE PROCEDURE  my_trigger_function();
  Since the trigger is called after each row update the actual row data
should be available in some way to the trigger.
  What functionality (SPI ?) do I use to use the column values from
the current row in the actual trigger?
   

See "Writing Trigger Functions in C" and "C-Language Functions" in
the documentation.  Here are links to documentation for the latest
version of PostgreSQL:
http://www.postgresql.org/docs/8.0/interactive/trigger-interface.html
http://www.postgresql.org/docs/8.0/interactive/xfunc-c.html
 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] Question about accessing current row data inside trigger

2005-03-13 Thread peter Willis
Hello,
I have a trigger function written in C.
The trigger function is called via:
CREATE TRIGGER after_update AFTER UPDATE ON some_table
   FOR EACH ROW EXECUTE PROCEDURE  my_trigger_function();
   Since the trigger is called after each row update the actual row data
should be available in some way to the trigger.
What functionality (SPI?) do I use to access the column values from
the current row in the actual trigger?
thanks for any insight,
Peter

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [Dnsmasq-discuss] question about limits of dnsmasg

2005-03-01 Thread Peter Willis
GrantC writes: 

On Tue, 01 Mar 2005 11:16:54 -0500, you wrote: 

Yes, it will work, almost exactly as you describe actually (though there are 
better ways of going about it using a file separate from /etc/hosts). Read 
the /etc/dnsmasq.conf file for examples and further detail. (There is even 
an example that changes all domains matching doubleclick.net to 127.0.0.1, 
which when combined with apache and virtual hosting makes for a very simple 
yet effective ad blocker) 


OT: I have apache (1.3.?) set up with virtual hosting, what do you 
do to use it as an ad-blocker?  


http://psypete.hatethesystem.com/tips/ad_blocking/http_redirection.txt 



Eric S. Johansson writes:  

I'm very impressed by the capabilities of dnsmasq but I'm trying to find out if 
what I want can be done without going to full bind.  

the network here is a classic red/green/orange security zone. 



Eric S. Johansson also writes:


It seems to me that in order to get the behavior I want, I will need to 
tell dnsmasq to not use resolv.conf but instead use name servers 
specified by 'server='.  Then I can have resolv.conf pointed at localhost. 

You may stop dnsmasq going 'outside' for localnet lookups by 
listing local machines in an additional hosts file.  From inside, 
all machines are visible; from outside, only what you want is visible. 

Example: firewall web server may deliver NFS mount content from 
other localnet machines.  Mountpoints have 'offline' message when 
not in operation.  Haven't had second morning coffee yet :) 
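
Something along these lines in dnsmasq.conf should cover it (untested and
from memory; the addresses and file names are only examples):

# use only the upstream servers given here, ignore /etc/resolv.conf
no-resolv
server=192.168.1.1
# extra hosts file for names that should only resolve on the local net
addn-hosts=/etc/hosts.internal
# send an entire domain to the local web server (the ad-blocking trick)
address=/doubleclick.net/127.0.0.1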


Cheers,
Grant. 



___
Dnsmasq-discuss mailing list
Dnsmasq-discuss@lists.thekelleys.org.uk
http://lists.thekelleys.org.uk/mailman/listinfo/dnsmasq-discuss






[Full-Disclosure] Re: New whitepaper: Writing IA32 Restricted Instruction Set Shellcode Decoder Loops

2004-11-17 Thread Peter Willis
Hey, cool paper. Speaking of phrack, if in the future you have an 
article you think is print-worthy but is rejected by most zines, try 
sending it to Binary Revolution [EMAIL PROTECTED]. Although they're 
newer and have had some delays in getting new issues out, they're 
starting to re-focus on the magazine and the number of their supporters 
is growing. Sorry if this comes off a little advertisey, but hopefully 
if more people write in then BinRev can publish more original articles 
about vulnerabilities which can then make it back onto the web as sample 
articles.

Berend-Jan Wever wrote:
Hi all,
This one got rejected by phrack and I couldn't be arsed to rewrite it so it 
would make the next edition:
Writing IA32 Restricted Instruction Set Shellcode Decoder Loops by SkyLined
( http://www.edup.tudelft.nl/~bjwever/whitepaper_shellcode.html )
The article addresses the requirements for writing a shellcode decoder loop 
using a limited number of characters that limits our instruction set. Most of 
it is based on my experience with alphanumeric decoders but the principles 
apply to any piece of code that is written to work with a limited instruction 
set. (It's a continuation on rix's and obscou's work for phrack).
Comments and questions welcome, but I can not guarantee an answer to n00b 
questions.
Cheers,
SkyLined
http://www.edup.tudelft.nl/~bjwever
[EMAIL PROTECTED]
 

___
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html


[GENERAL] How do I recover from pg_xlog/0000000000000000 (log file 0, segment 0) failed: No such file or directory

2004-10-19 Thread peter Willis
Hello,
Is there a way to recover from the following error?
I have (had) an existing database and wish not
to lose the data tables.
Thanks for any help,
Pete
[EMAIL PROTECTED] /]$ pg_ctl start
postmaster successfully started
[EMAIL PROTECTED] /]$ LOG:  database system shutdown was interrupted at 
2004-10-18 11:41:55 PDT
LOG:  open of /web2-disk1/grip/database/pg_xlog/0000000000000000 (log 
file 0, segment 0) failed: No such file or directory
LOG:  invalid primary checkpoint record
LOG:  open of /web2-disk1/grip/database/pg_xlog/0000000000000000 (log 
file 0, segment 0) failed: No such file or directory
LOG:  invalid secondary checkpoint record
PANIC:  unable to locate a valid checkpoint record
LOG:  startup process (pid 2803) was terminated by signal 6
LOG:  aborting startup due to startup process failure

[EMAIL PROTECTED] /]$

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[ADMIN] Installation problem with libreadline

2003-08-14 Thread Peter Willis
Hello,

I have seen a number of links via google describing build problems
(./configure)
with postgres and libreadline.so.

How do I go about resolving the undefined ncurses references in libreadline?
I have Mandrake Linux.

Thanks for the help

Peter



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html