Re: New JSON encoding method proposal for custom objects

2015-12-01 Thread Burak Arslan
hey,

On 11/30/15 14:35, cescu...@gmail.com wrote:
>
> Hello everyone and thank you for your interest!
>
> Peter's code is very similar to what I think the default JSON encoder 
> should be.
>
> The advantage of the method that I propose is that you should not care 
> anymore about which encoder you're going to use even in case of different 
> class instances. Imagine if you could just do
>
>   json.dumps({[1,2,3], Obj(), [DifferentObj()] })
>
>

FWIW, Spyne can do the exact same thing -- i.e. serialize an object,
given its definition, to whatever format you want (currently xml, json,
yaml and msgpack are supported). Here's a json example:

>>> from spyne import *
>>> from spyne.util.dictdoc import get_object_as_json, get_json_as_object
>>> get_object_as_json([1,2,3], Array(Integer))
'[1, 2, 3]'
>>>
>>> from datetime import datetime
>>>
>>> class SomeObject(ComplexModel):
...     s = Unicode
...     i = Integer
...     dt = DateTime
...
>>> get_object_as_json(SomeObject(s='str', i=42, dt=datetime.now()),
...                    complex_as=list)
'[42, "str", "2015-12-01T12:57:23.751631"]'
>>>
>>> get_json_as_object('[42, "str", "2015-12-01T12:55:21.777108"]',
...                    SomeObject, complex_as=list)
SomeObject(i=42, s=u'str', dt=datetime.datetime(2015, 12, 1, 12, 55, 21,
777108))
>>>
>>>

More info: http://spyne.io

Best,
Burak

PS: The astute reader will notice that element order in SomeObject could
be totally random.

* In Python 3, we solve that by returning an odict() in the __prepare__
of the ComplexModel metaclass.
* In Python 2, we solve that by somehow getting hold of the AST of the class
definition and deducing the order from there. Yes you read that right! I
know, it's horrible! Don't worry, it's turned off by default. We
recommend the workaround in [1] for Python 2. See [2] and [3] to see how
we integrated it.

[1]: http://spyne.io/docs/2.10/manual/03_types.html#complex
[2]: https://github.com/arskom/spyne/pull/313
[3]:
https://github.com/arskom/spyne/blob/2768c7ff0b5f58aa0e47859fcd69e5bb7aa31aba/spyne/util/meta.py
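
For the curious, here's a minimal sketch of the Python 3 trick (the
PreservesOrder metaclass and _type_info_order attribute are illustrative
names, not Spyne's actual API):

from collections import OrderedDict

class PreservesOrder(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        # The namespace the class body populates is ordered, so the
        # declaration order of the attributes is not lost.
        return OrderedDict()

    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, dict(namespace))
        # Remember the order of the non-dunder attributes.
        cls._type_info_order = [k for k in namespace
                                if not k.startswith('__')]
        return cls

class SomeObject(metaclass=PreservesOrder):
    s = None
    i = None
    dt = None

print(SomeObject._type_info_order)  # ['s', 'i', 'dt']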

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to handle attachments passed via Postfix

2015-10-13 Thread Burak Arslan




On 10/13/15 00:52, Anthony Papillion wrote:
>> Check out the email.parser module, or the convenience function
>> > email.message_from_string - you should be able to get at the
>> > different parts (including attachments) from there.
>> >
> Many thanks! Checking it out now. Sounds like exactly what I'm looking
> for.

Also have a look at flanker (https://github.com/mailgun/flanker), which
does basically the same thing.

Here's why it exists:
https://github.com/mailgun/flanker/blob/master/docs/User%20Manual.md#rationale
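
If you'd rather stay in the standard library, here's a minimal sketch
(the file name and the idea of just printing attachment sizes are
illustrative):

import email

with open("message.eml") as f:
    msg = email.message_from_string(f.read())

for part in msg.walk():
    filename = part.get_filename()
    if filename:  # parts carrying a filename are usually attachments
        payload = part.get_payload(decode=True)  # decoded bytes
        print(filename, len(payload), "bytes")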

Best,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: XML Binding

2015-09-03 Thread Burak Arslan
Hello,

On 09/03/15 19:54, Palpandi wrote:
> Hi All,
>
> Is there any module available in python standard library for XML binding? If 
> not, any other suggestions.

lxml is the right xml library to use. You can use lxml's objectify or Spyne.

Here are some examples:

http://stackoverflow.com/questions/19545067/python-joining-and-writing-xml-etrees-trees-stored-in-a-list

> Which is good for parsing large file?
> 1. XML binding
> 2. Creating our own classes

If you're dealing with huge files, I suggest using just lxml and working
with the raw data. Deserializing xml documents into python classes sure
is nicer, but it has a performance overhead that becomes more and more
visible as the amount of data you deal with grows.
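
A minimal sketch of the "just lxml" approach for huge files, using
iterparse ("huge.xml", the "record" tag and handle() are all illustrative
names):

from lxml import etree

def handle(elem):
    print(elem.tag, elem.attrib)

for event, elem in etree.iterparse("huge.xml", tag="record"):
    handle(elem)
    # Free memory as we go, otherwise the whole tree stays around.
    elem.clear()
    while elem.getprevious() is not None:
        del elem.getparent()[0]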

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: New to Programming - XML Processing

2015-04-01 Thread Burak Arslan
On 04/01/15 06:27, catperson wrote:
 I am new to programming, though not new to computers.  I'm looking to
 teach myself Python 3 and am working my way through a tutorial.  At
 the point I'm at in the tutorial I am tasked with parsing out an XML
 file created with a Garmin Forerunner and am just having a terrible
 time getting my head around the concepts.  What I'm looking for is
 some suggested reading that might give me some of the theory of
 operation behind ElementTree and then how to parse out specific
 elements.  Most of what I have been able to find in examples that I
 can understand use very simplistic XML files and this Garmin file is
 many levels of sub-elements and some of those elements have attributes
 assigned, like Activity Sport=Running.

As everybody loves objects, there are many libraries that parse xml
documents into regular python objects (with proper hierarchy). These are
supposed to save you from dealing with the ElementTree API directly.

Have a look here:

http://stackoverflow.com/questions/19545067/python-joining-and-writing-xml-etrees-trees-stored-in-a-list
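
To give you a taste of the ElementTree side of things, here's a minimal
sketch (element and attribute names are illustrative, and namespaces are
ignored for brevity -- real Garmin files are namespaced, so you'd prefix
tags with "{namespace-uri}"):

import xml.etree.ElementTree as ET

tree = ET.parse("activity.xml")  # hypothetical file name
root = tree.getroot()

for activity in root.iter("Activity"):
    print("sport:", activity.get("Sport"))  # attribute access
    for lap in activity.iter("Lap"):
        print("  distance:", lap.findtext("DistanceMeters"))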

Hth,
Burak


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Own network protocol

2014-12-29 Thread Burak Arslan
Hi,


On 12/29/14 10:18, pfranke...@gmail.com wrote:
 Hello Steven!

 Thank you for your answer!

 RPyC indeed looks great! I need to deep dive into the API reference, but I 
 think its capabilities will suffice to do what I want. Do you know whether 
 non-python related clients can work with this as well, i.e. are their 
 corresponding tools on the android/iphone side? Just to know whether they 
 could serve as clients as well.


If you need more than inter-process messaging, you can use Spyne, which
uses standard protocols instead of inventing its own.

I suggest using the TwistedMessagePack transport to wrap messages and the
MessagePack protocol for the actual content. Spyne also supports some
declarative validation, so it can be used as a public daemon.

Disclaimer: I'm the author of Spyne.

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to generate a wsdl file for a web service in python

2014-12-21 Thread Burak Arslan

On 12/19/14 12:45, brice DORA wrote:
 I already have my python file which contains all the methods of my web service. 
 So could you give an example or tell me how I can do it...

No, all you need is there in that example.

You need to decorate your functions using Spyne's @rpc, denote
input/output types using Spyne's type markers, wrap them inside a subclass
of Spyne's ServiceBase, pass the resulting service class to a Spyne
Application, setting your input/output protocols (you probably want Soap),
and pass that application to a transport of your choice (you probably
want WsgiApplication). If you need a Wsgi server, use CherryPy.
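
A rough sketch of those steps, adapted from the helloworld example in
Spyne's docs (import paths can differ between Spyne versions, so treat
this as a starting point rather than a definitive recipe):

from spyne import Application, rpc, ServiceBase, Unicode, Integer, Iterable
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication

class HelloWorldService(ServiceBase):
    @rpc(Unicode, Integer, _returns=Iterable(Unicode))
    def say_hello(ctx, name, times):
        for _ in range(times):
            yield u'Hello, %s' % name

application = Application([HelloWorldService], tns='spyne.examples.hello',
                          in_protocol=Soap11(validator='lxml'),
                          out_protocol=Soap11())

# The wsdl is generated for you and served at <url>?wsdl once this wsgi
# app is mounted on a wsgi server.
wsgi_application = WsgiApplication(application)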

Bon courage,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to generate a wsdl file for a web service in python

2014-12-18 Thread Burak Arslan

On 12/18/14 11:58, brice DORA wrote:
 hi to all I am new to python and as part of my project I would like to create 
 a SOAP web service. for now I've developed my python file with all the 
 methods of my future web service, but my problem now is how to generate the 
 wsdl file ... my concern may seem to move so bear with me because I am a 
 novice in this field. thank you in advance


Hi,

You can use Spyne to generate the wsdl file.

http://spyne.io/docs/2.10/manual/02_helloworld.html
https://github.com/arskom/spyne/blob/master/examples/helloworld_soap.py

There's also s...@python.org for your soap-specific questions.

best,
burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [ANN] PyUBL

2014-11-27 Thread Burak Arslan

On 11/26/14 08:53, dieter wrote:
 Burak Arslan burak.ars...@arskom.com.tr writes:
 We've gone through the grunt work of researching and integrating
 XMLDSIG, XAdES and UBL schemas and its various extensions and
 dependencies and wrote a bunch of scripts that map these documents to
 python objects.
 In this context, I would like to mention PyXB
 (https://pypi.python.org/pypi/PyXB). It is a general mapper
 for XML-Schema.

Well, Spyne is a general-purpose object mapper. It maps python objects
to xml documents and back using a given xml schema as well as to json,
yaml, etc. etc. and also to relational databases (via sqlalchemy).

I didn't know about PyXB, and I'm sure it's way better than Spyne at parsing
esoteric Xml Schema documents, but in this case Spyne seems to be doing
a good enough job.

Best,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


[ANN] PyUBL

2014-11-25 Thread Burak Arslan
All,

We've gone through the grunt work of researching and integrating
XMLDSIG, XAdES and UBL schemas and its various extensions and
dependencies and wrote a bunch of scripts that map these documents to
python objects.

UBL stands for Universal Business Language. It's an OASIS standard that
defines entities that every business should be able to handle (e.g.
invoices, with optional cryptographic signature and/or encryption). The
XMLDSIG and XAdES standards define cryptographic primitives for XML
documents.

The repository is made available under the three-clause BSD license
here: https://github.com/arskom/pyubl

We hope you find it useful and welcome any collaboration and feedback on
this subject.

Best regards,
Burak


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python vs C++

2014-08-21 Thread Burak Arslan

On 08/21/14 15:54, David Palao wrote:
 But I'm interested in a genuine
 C++ project: some task where C++ is really THE language (and where
 python is actually a bad ab initio choice)

For my day job, I chose Qt on C++ for a classic desktop app that needs
to be deployed on Windows (among other platforms) with an installation
package that is as small as possible.

All I need to do deployment-wise is to create an NSIS script putting a
couple of DLLs and my executable in a folder in %ProgramFiles% and a
shortcut in the start menu. The full package is 5 megs and the update
archive is pushing half a megabyte.

I was also back to C++ after a number of years of exclusive web dev with
Python and Javascript. C++11 is just *sweet*, I'd never imagined I'd
enjoy doing non-computer-sciencey work with C++.

good luck,
burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Distributing python applications as a zip file

2014-07-23 Thread Burak Arslan

On 07/23/14 07:23, Steven D'Aprano wrote:
 A little known feature of Python: you can wrap your Python application in 
 a zip file and distribute it as a single file. The trick to make it 
 runnable is to put your main function inside a file called __main__.py 
 inside the zip file. Here's a basic example:

 steve@runes:~$ cat __main__.py 
 print("NOBODY expects the Spanish Inquisition!!!")

 steve@runes:~$ zip appl __main__.py 
   adding: __main__.py (stored 0%)
 steve@runes:~$ rm __main__.py 
 steve@runes:~$ python appl.zip 
 NOBODY expects the Spanish Inquisition!!!



Does it support package_data? Or, more specifically, does
pkg_resources.resource_* detect that the script is running from a zip
file and adjust accordingly?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Burak Arslan

On 06/03/14 12:30, Chris Angelico wrote:
 Write me a purely nonblocking
 web site concept that can handle a million concurrent connections,
 where each one requires one query against the database, and one in a
 hundred of them require five queries which happen atomically.


I don't see why that can't be done. Twisted has everything I can think of
except database bits (adb runs on threads), and I have txpostgres[1]
running in production; it seems quite robust so far. What else are we
missing?

[1]: https://pypi.python.org/pypi/txpostgres

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-03 Thread Burak Arslan

On 03/06/14 14:57, Chris Angelico wrote:

On Tue, Jun 3, 2014 at 9:05 PM, Burak Arslan burak.ars...@arskom.com.tr wrote:

On 06/03/14 12:30, Chris Angelico wrote:

Write me a purely nonblocking
web site concept that can handle a million concurrent connections,
where each one requires one query against the database, and one in a
hundred of them require five queries which happen atomically.


I don't see why that can't be done. Twisted has everyting I can think of
except database bits (adb runs on threads), and I got txpostgres[1]
running in production, it seems quite robust so far. what else are we
missing?

[1]: https://pypi.python.org/pypi/txpostgres

I never said it can't be done. My objection was to Marko's reiterated
statement that asynchronous coding is somehow massively cleaner than
threading; my argument is that threading is often significantly
cleaner than async, and that at worst, they're about the same (because
they're dealing with exactly the same problems).


Ah ok. Well, after a couple of years of writing async code, my 
not-so-objective opinion is that it forces you to split your 
code into functions, just like Python forces you to indent your code 
properly. This in turn generally helps the quality of the codebase.


If you manage to keep yourself out of closure hell by not writing 
more and more functions inside one another, I'd say async code and 
(non-sloppy) blocking code look almost the same. (Which means, I guess, 
that we mostly agree :))


Burak
--
https://mail.python.org/mailman/listinfo/python-list


Re: Benefits of asyncio

2014-06-02 Thread Burak Arslan

On 06/02/14 20:40, Aseem Bansal wrote:
 I read in these groups that asyncio is a great addition to Python 3. I have 
 looked around and saw the related PEP which is quite big BTW but couldn't 
 find a simple explanation for why this is such a great addition. Any simple 
 example where it can be used? 

AFAIR, Guido's PyCon US 2013 keynote is where he introduced asyncio (or
tulip, the internal codename of the project), so you can watch it to get
a good idea of his motivations.

So what is Asyncio? In a nutshell, Asyncio is Python's standard event
loop. Next time you're going to build an async framework, you should
build on it instead of reimplementing it using system calls available on
the platform(s) that you're targeting, like select() or epoll().

It's great because 1) creating an abstraction over the Windows and Unix
ways of event-driven programming is not trivial, and 2) it makes use of
yield from, a feature available in Python 3.3 and up. Using yield from is
arguably the cleanest way of doing async, as it makes async code look
like blocking code, which seemingly makes it easier to reason about the
flow of your logic.

The idea is very similar to twisted's @inlineCallbacks, if you're
familiar with it.
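
Here's a minimal sketch of what that looks like, Python 3.4 style (the
delays and values are made up):

import asyncio

@asyncio.coroutine
def fetch(delay, value):
    # Execution is suspended here and the event loop runs other tasks;
    # no callbacks needed.
    yield from asyncio.sleep(delay)
    return value

@asyncio.coroutine
def main():
    a = yield from fetch(0.1, 'first')   # reads like blocking code
    b = yield from fetch(0.2, 'second')
    print(a, b)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()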

If doing lower level programming with Python is not your cup of tea, you
don't really care about asyncio. You should instead wait until your
favourite async framework switches to it.



 It can be used to have a queue of tasks? Like threads? Maybe light weight 
 threads? Those were my thoughts but the library reference clearly stated that 
 this is single-threaded. So there should be some waiting time in between the 
 tasks. Then what is good?

You can use it to implement a queue of (mostly i/o bound) tasks. You are
not supposed to use it in cases where you'd use threads or lightweight
threads (or green threads, as in gevent or stackless).

Gevent is also technically async but gevent and asyncio differ in a very
subtle way: Gevent does cooperative multitasking whereas Asyncio (and
twisted) does event driven programming.

The difference is that with asyncio, you know exactly when you're
switching to another task -- only when you use yield from. This is not
always explicit with gevent, as a function that you're calling can
switch to another task without letting your code know.

So with gevent, you still need to take the usual precautions of
multithreaded programming. Gevent actually simulates threads by doing
task switching (or thread scheduling, if you will) in userspace. Here's
its secret sauce:
https://github.com/python-greenlet/greenlet/tree/master/platform

There's some scary platform-dependent assembly code in there! I'd think
twice before seriously relying on it.

Event driven programming does not need such dark magic. You also don't
need to be so careful in a purely event-driven setting as you know that
at any point in time only one task context can be active. It's like you
have an implicit, zero-overhead LOCK ALL for all nonlocal state.

Of course the tradeoff is that you should carefully avoid blocking the
event loop. It's not that hard once you get the hang of it :)

So, I hope this answers your questions. Let me know if I missed something.

Best regards,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Suds and Python3

2014-06-01 Thread Burak Arslan
Hello,

First, for such questions, there's always s...@python.org

On 31/05/14 21:59, Paul McNett wrote:
 On 5/31/14, 11:36 AM, tokib...@gmail.com wrote:
 Suds is the de facto python SOAP client, but it has not been maintained in
 the recent few years. Why?


The original authors don't seem to care anymore. If you search PyPi
you'll see that there are many suds forks as a result. See e.g.
https://pypi.python.org/pypi/suds-jurko/0.6

This was a popular topic during past month:
https://mail.python.org/pipermail/soap/2014-May/thread.html

 Is it really the defacto? It seems like I've heard more about
 pysimplesoap, and looking at GitHub there have been commits in the
 past 4 days.

Yes, suds is really the de facto soap client for python. I'd even
dropped the soap client in Spyne years ago in favor of suds. Seeing
suds' current situation though, I'm more and more tempted to sit home
one weekend and bring it back.

 In general, SOAP has been falling out of favor over the past half
 decade at least because of its relative heaviness next to, e.g.
 RESTful web services usually using JSON instead of XML. Way, way
 simpler and more fun to do.


Xml also has its strengths, especially compared to json, which only
supports 6 types: string, number, dict, list, boolean (true/false) and
null. Json gets hairy very fast even when you try to do seemingly simple
things like serializing arbitrary-precision decimals.

 And from what I can tell without actually trying any of them,
 pysimplesoap feels like the best option currently. 

Not really, there are other options. See the discussions in the above link.

Best,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Verify JSON Data

2014-05-26 Thread Burak Arslan

On 26/05/14 16:26, gaurangns...@gmail.com wrote:

Hi Guys,

Would someone let me know how to verify JSON data in python? There are so many 
modules available to verify XML files, however I didn't find any good module to 
verify JSON data.



Hi,

Spyne re-implements (a useful subset of) Xml Schema validation so that 
it can be applied to other document formats like json. It's 'soft' 
validation in Spyne's terms.


http://spyne.io

Disclosure: I'm the author of Spyne and starting to feel like I'm 
talking a little bit too much about my project on this list :)


Hth,
Burak

--
https://mail.python.org/mailman/listinfo/python-list


Re: Code a web service with python/postgis

2014-05-21 Thread Burak Arslan

On 05/20/14 21:10, lcel...@latitude-geosystems.com wrote:
 Dear all,

 I would like to code a web service with python. I have already imported
 several vector data
 (land cover) and one Digital Elevation Model (raster layer) into my
 postgresql/postgis
 database (server side).

 I succeeded in connecting to my pg db via psycopg2.

 Client side, operators use a client application (Developed with PHP /
 javascript /
 openlayers).

 Objectives: Client side, once the layer is selected and once
 the operators have
 clicked on the map, they would like useful information to appear
 on the interface
 of the client application (kind of land cover, z of the DEM).


 = Regarding my python script, I have to issue a SQL query in order to
 select useful
 information from the db layers. And, of course, the information must 
 depend on geographic
 coordinates (latitude Y / longitude X).
 Then, my script must produce a result (JSON) for the
 client side.


Hi,

Spyne supports Point, Line, Polygon and their Multi* variants. e.g.:
http://spyne.io/docs/2.10/reference/model/primitive.html#spyne.model.primitive.Line

This means it can validate WKT input and produce WKT output.

It also includes SQLAlchemy adapters for these types. This means you
don't need to use GeoAlchemy if you have Spyne.

Finally, Spyne can return your data in Soap, Xml, Json, Yaml, MsgPack,
html table, html microformat, etc., or you can implement your own
protocols if you don't like the ones already provided.

Spyne web site: http://spyne.io
If you have further questions you can use Spyne tag in stackoverflow or
http://lists.spyne.io/listinfo/people

Disclaimer: I'm the author of Spyne. I already have GIS projects based
on Spyne in production.

Best regards,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python CGI

2014-05-19 Thread Burak Arslan

On 05/19/14 21:32, Christian wrote:
 Hi,

 I'd like to use Python for CGI-Scripts. Is there a manual how to setup
 Python with Fast-CGI?

Look for Mailman fastcgi guides.

Here's one for gentoo, but I imagine it'd be easily applicable to other
distros:
https://www.rfc1149.net/blog/2010/12/30/configuring-mailman-with-nginx-on-gentoo/

hth,
burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: parsing multiple root element XML into text

2014-05-09 Thread Burak Arslan

On 05/09/14 16:55, Stefan Behnel wrote:
 ElementTree has gained a nice API in
 Py3.4 that supports this in a much saner way than SAX, using iterators.
 Basically, you just dump in some data that you received and get back an
 iterator over the elements (and their subtrees) that it generated from it.
 Intercept on the right top elements and you get your next subtree as soon
 as it's ready.


Hi Stefan,

Here's a small script:

events = etree.iterparse(istr, events=("start", "end"))
stack = deque()
for event, element in events:
    if event == "start":
        stack.append(element)

    elif event == "end":
        stack.pop()

        if len(stack) == 0:
            break

    print(istr.tell(), "%5s, %4s, %s" % (event, element.tag, element.text))

where istr is an input-stream. (Fully working example:
https://gist.github.com/plq/025005a71e8135c46800)

I was expecting to have istr.tell() return the position where the first
root element ends, which would make it possible to continue parsing with
another call to etree.iterparse(). But istr.tell() returns the position
of EOF after the first call to next() on the iterator it returns.
Without the stack check, the loop eventually throws an exception and the
offset value in that exception is None.

So I'm lost here: how would it be possible to parse the OP's document with lxml?

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Add Received: header to email msg in correct position?

2014-05-07 Thread Burak Arslan

On 05/06/14 18:26, Grant Edwards wrote:
 On 2014-05-06, Burak Arslan burak.ars...@arskom.com.tr wrote:
 On 05/06/14 12:47, Chris Angelico wrote:
 On Tue, May 6, 2014 at 7:15 PM, alister
 alister.nospam.w...@ntlworld.com wrote:
 On Mon, 05 May 2014 19:51:15 +, Grant Edwards wrote:

 I'm working on a Python app that receives an e-mail message via SMTP,
 does some trivial processing on it, and forwards it to another SMTP
 server.

 I'd like to do the polite thing and add a Received: header, but I
 can't figure out how to get Python's email module to add it in the
 correct place.  It always ends up at the bottom of the headers below
  From: To: etc.  It's supposed to go above all the Received:
  headers that were there when I received it.
 Is this required or just being polite?
 what I mean is does the standard state the headers must be in a
 particular order or can they appear anywhere, you may be spending time
 trying to resolve an issue that does not need fixing.
 Yes, it's required. RFC 2821 [1] section 3.8.2 says prepend.
 The rationale for prepend is to make it possible for MTAs to add
 their Received: headers to messages without having to parse them.

 So you're supposed to do the same: Just write your Received header,
 followed by '\r\n', followed by the rest of the message to the socket
 and you should be fine.
 I need to check and manipulate other headers for other reasons, so I'm
 using the email module for that.  In order to keep things consistent
 and easy to understand, I'd like to use the email module to prepend
 the Received header as well.  That keeps my application from having to
 have any knowledge about e-mail message formatting.


Seeing how discussion is still going on about this, I'd like to state
once more what I said above in other words: You just need to do this:

"Received: blah\r\n" + message.to_string()

or better:

socket.write("Received: blah\r\n")
socket.write(message.to_string())

And again, this is not a hack, this is how it's supposed to work.

Burak


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Add Received: header to email msg in correct position?

2014-05-06 Thread Burak Arslan

On 05/06/14 12:47, Chris Angelico wrote:
 On Tue, May 6, 2014 at 7:15 PM, alister
 alister.nospam.w...@ntlworld.com wrote:
 On Mon, 05 May 2014 19:51:15 +, Grant Edwards wrote:

 I'm working on a Python app that receives an e-mail message via SMTP,
 does some trivial processing on it, and forwards it to another SMTP
 server.

 I'd like to do the polite thing and add a Received: header, but I
 can't figure out how to get Python's email module to add it in the
 correct place.  It always ends up at the bottom of the headers below
  From: To: etc.  It's supposed to go above all the Received:
  headers that were there when I received it.
 Is this required or just being polite?
 what I mean is does the standard state the headers must be in a
 particular order or can they appear anywhere, you may be spending time
 trying to resolve an issue that does not need fixing.
 Yes, it's required. RFC 2821 [1] section 3.8.2 says prepend.



The rationale for prepend is to make it possible for MTAs to add their
Received: headers to messages without having to parse them.

So you're supposed to do the same: Just write your Received header,
followed by '\r\n', followed by the rest of the message to the socket
and you should be fine.

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Soap list and soap users on this list

2014-04-28 Thread Burak Arslan

Hi Joseph,

Sorry for the late response, I seem to have missed this post.

On 04/17/14 21:34, Joseph L. Casale wrote:
 I've been looking at Spyne to produce a service that
 can accept a request formatted as follows:

 <?xml version='1.0' encoding='UTF-8'?>
 <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://..." xmlns:xsi="http:/..." 
 xmlns:xsd="http://...">
 <SOAP-ENV:Body>
 <modifyRequest returnData="everything" xmlns="urn:...">
   <attr ID="..."/>
   <data>
   </data>
 </modifyRequest>
   </SOAP-ENV:Body>
 </SOAP-ENV:Envelope>

https://gist.github.com/plq/11384113

Unfortunately, you need the latest Spyne from
https://github.com/arskom/spyne, as this doesn't work with 2.10.

2.11 is due around the end of May, beginning of June.

Ping back if you got any other questions.

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Wikipedia XML Dump

2014-01-28 Thread Burak Arslan
hi,

On 01/29/14 00:31, Kevin Glover wrote:
 Thanks for the comments, guys. The Wikipedia download is a single XML 
 document, 43.1GB. Any further thoughts?



in that case, http://lxml.de/tutorial.html#event-driven-parsing seems to
be your only option.

hth,
burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python declarative

2014-01-24 Thread Burak Arslan
On 01/24/14 11:21, Frank Millman wrote:
 I store database metadata in the database itself. I have a table that 
 defines each table in the database, and I have a table that defines each 
 column. Column definitions include information such as data type, allow 
 null, allow amend, maximum length, etc. Some columns require that the value 
 is constrained to a subset of allowable values (e.g. 'title' must be one of 
 'Mr', 'Mrs', etc.). I know that this can be handled by a 'check' constraint, 
 but I have added some additional features which can't be handled by that. So 
 I have a column in my column-definition table called 'choices', which 
 contains a JSON'd list of allowable choices. It is actually more complex 
 than that, but this will suffice for a simple example.


I wonder whether your use cases can be fully handled by the Xml Schema
standard. It's quite robust and easy to use. You are already doing
validation at the app level, so I don't think it'd be much of a problem
for you to adopt it.

e.g.

>>> from spyne.model import *
>>> class C(ComplexModel):
...     title = Unicode(values=['Mr', 'Mrs', 'Ms'])
...
>>> from lxml import etree
>>> from spyne.util.xml import get_validation_schema
>>> schema = get_validation_schema([C], 'some_ns')
>>> doc1 = etree.fromstring('<C xmlns="some_ns"><title>Mr</title></C>')
>>> print schema.validate(doc1)
True
>>> doc2 = etree.fromstring('<C xmlns="some_ns"><title>xyz</title></C>')
>>> print schema.validate(doc2)
False
>>> print schema.error_log
string:1:0:ERROR:SCHEMASV:SCHEMAV_CVC_ENUMERATION_VALID: Element
'{some_ns}title': [facet 'enumeration'] The value 'xyz' is not an
element of the set {'Mr', 'Mrs', 'Ms'}.
string:1:0:ERROR:SCHEMASV:SCHEMAV_CVC_DATATYPE_VALID_1_2_1: Element
'{some_ns}title': 'xyz' is not a valid value of the atomic type
'{some_ns}C_titleType'.



Also, if you need conversion between various serialization formats and
Python object hierarchies, I put together an example for you:
https://gist.github.com/plq/8596519

I hope these help.

Best,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-29 Thread Burak Arslan
On 12/29/13 07:06, Ian Kelly wrote:
 On Sat, Dec 28, 2013 at 5:35 PM, Burak Arslan
 burak.ars...@arskom.com.tr wrote:
 On 12/29/13 00:13, Burak Arslan wrote:
 Hi,

 Have a look at the following code snippets:
 https://gist.github.com/plq/8164035

 Observations:

 output2: I can break out of outer context without closing the inner one
 in Python 2
 output3: Breaking out of outer context closes the inner one, but the
 closing order is wrong.
 output3-yf: With yield from, the closing order is fine but yield returns
 None before throwing.
 It doesn't, my mistake. Python 3 yield from case does the right thing, I
 updated the gist. The other two cases still seem weird to me though. I
 also added a possible fix for python 2 behaviour in a separate script,
 though I'm not sure that's the best way of implementing a poor man's yield from.
 I don't see any problems here.  The context managers in question are
 created in separate coroutines and stored on separate stacks, so there
 is no inner and outer context in the thread that you posted.  I
 don't believe that they are guaranteed to be called in any particular
 order in this case, nor do I think they should be.

First, Python 2 and Python 3 are doing two separate things here: Python
2 doesn't destroy an orphaned generator and waits until the end of the
execution. The point of having raw_input at the end is to illustrate
this. I'm tempted to call this a memory leak bug, especially after
seeing that Python 3 doesn't behave the same way.

As for the destruction order, I don't agree that destruction order of
contexts should be arbitrary. Triggering the destruction of a suspended
stack should first make sure that any allocated objects get destroyed
*before* destroying the parent object. But then, I can think of all
sorts of reasons why this guarantee could be tricky to implement, so I
can live with this fact if it's properly documented. We should just use
'yield from' anyway.


 For example, the first generator could yield the second generator back
 to its caller and then exit, in which case the second generator would
 still be active while the context manager in the first generator would
 already have done its clean-up.

Sure, if you pass the inner generator back to the caller of the outer
one, the inner one should survive. The refcount of the inner is not zero
yet. That doesn't have much to do with what I'm trying to illustrate
here though.

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


outsmarting context managers with coroutines

2013-12-28 Thread Burak Arslan
Hi,

Have a look at the following code snippets:
https://gist.github.com/plq/8164035

Observations:

output2: I can break out of outer context without closing the inner one
in Python 2
output3: Breaking out of outer context closes the inner one, but the
closing order is wrong.
output3-yf: With yield from, the closing order is fine but yield returns
None before throwing.

All of the above seem buggy in their own way. And actually, Python 2
seems to leak memory when generators and context managers are used this way.

Are these behaviours intentional? How much of it is
implementation-dependent? Are they documented somewhere? Neither PEP-342
nor PEP-380 talk about context managers and PEP-343 talks about
generators but not coroutines.

My humble opinion:

1) All three should behave in the exact same way.
2) Throwing into a generator should not yield None before throwing.

Best,
Burak

ps: I have:

$ python -V; python3 -V
Python 2.7.5
Python 3.3.2
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-28 Thread Burak Arslan
On 12/29/13 00:13, Burak Arslan wrote:
 Hi,

 Have a look at the following code snippets:
 https://gist.github.com/plq/8164035

 Observations:

 output2: I can break out of outer context without closing the inner one
 in Python 2
 output3: Breaking out of outer context closes the inner one, but the
 closing order is wrong.
 output3-yf: With yield from, the closing order is fine but yield returns
 None before throwing.

It doesn't, my mistake. Python 3 yield from case does the right thing, I
updated the gist. The other two cases still seem weird to me though. I
also added a possible fix for python 2 behaviour in a separate script,
though I'm not sure that's the best way of implementing a poor man's yield from.

Sorry for the noise.

Best,
Burak

 All of the above seem buggy in their own way. And actually, Python 2
 seems to leak memory when generators and context managers are used this way.

 Are these behaviours intentional? How much of it is
 implementation-dependent? Are they documented somewhere? Neither PEP-342
 nor PEP-380 talk about context managers and PEP-343 talks about
 generators but not coroutines.

 My humble opinion:

 1) All three should behave in the exact same way.
 2) Throwing into a generator should not yield None before throwing.

 Best,
 Burak

 ps: I have:

 $ python -V; python3 -V
 Python 2.7.5
 Python 3.3.2

-- 
https://mail.python.org/mailman/listinfo/python-list


Bootstrapping a test environment

2013-12-17 Thread Burak Arslan
Hello list,

I decided to set up a portable Jenkins environment for an open source
project I'm working on.

After a couple of hours of tinkering, I ended up with this:

https://github.com/arskom/spyne/blob/05f7a08489e6dc04a3b5659eb325390bea13b2ff/run_tests.sh
(it should have been a Makefile)

This worked just fine.

Next, I wanted to integrate other Python implementations to this setup,
but none really worked so far. Jython-2.7-beta installs fine but seems
to have problems fetching stuff from the internet. I couldn't get
IronPython to compile under mono at all. I haven't looked at PyPy deeply,
but the resource requirements for building it seem scary...

Here's the latest version of that script:
https://github.com/arskom/spyne/blob/master/run_tests.sh

Note that there's nothing project-specific there except the --source and
--omit parameters to coverage.

Here's the script in action: https://spyne.ci.cloudbees.com/. Btw, kudos
to the cool folks at cloudbees for supporting FOSS; it's been rock solid so
far. They've got unlimited parallel executors, no less :)

So, could people who've done similar things share their experience? How
do you suggest I should proceed here? Do you have an ipy/mono
version combination that works? Is there a similar build script for PyPy
somewhere?

Pull requests are very much welcome :)

Best regards,
Burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Try-except for flow control in reading Sqlite

2013-10-28 Thread Burak Arslan
On 10/28/13 05:43, Victor Hooi wrote:
 Hi,

 I'd like to double-check something regarding using try-except for controlling 
 flow.

 I have a script that needs to lookup things in a SQLite database.

 If the SQLite database file doesn't exist, I'd like to create an empty 
 database, and then setup the schema.

 Is it acceptable to use try-except in order to achieve this? E.g.:

 try:
 # Try to open up the SQLite file, and lookup the required entries
 except OSError:
 # Open an empty SQLite file, and create the schema


This doesn't protect against a partially-created schema. Do you have
something like a version table in your database as the last created
table? You can check for its existence in the except block, and if it's
not there, you should remove the file and re-create it.

to get a list of tables:

select * from sqlite_master where type='table';
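
Here's a minimal sketch of that idea, assuming a hypothetical 'version'
table is the last one your schema creates and create_schema() is your own
setup function:

import os
import sqlite3

def open_db(path):
    conn = sqlite3.connect(path)
    tables = {row[0] for row in conn.execute(
        "select name from sqlite_master where type='table'")}
    if 'version' not in tables:
        # Schema is missing or only partially created: start from scratch.
        conn.close()
        os.remove(path)
        conn = sqlite3.connect(path)
        create_schema(conn)
    return conn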

best,
burak

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: lxml question -- creating an etree.Element attribute with ':' in the name

2013-09-18 Thread Burak Arslan
On 09/18/13 21:59, Roy Smith wrote:
 I can create an Element with a 'foo' attribute by doing:

  etree.Element('my_node_name', foo="spam")

 But, how do I handle something like:

  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance", since xmlns:xsi 
 isn't a valid python identifier?



xmlns: is a prefix with a special meaning: it defines an xml namespace
prefix. You should read up on how namespaces work.

The following:

Element('{http://www.w3.org/2001/XMLSchema-instance}my_node_name')

will generate a proper xmlns declaration for you. It may not be the same
every time, but it will do the job just as well.
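
For completeness, a minimal sketch of setting a namespaced *attribute*
(which is what an xsi:-prefixed attribute is) with lxml; the element name
and the xsi:nil attribute are just examples:

from lxml import etree

XSI = "http://www.w3.org/2001/XMLSchema-instance"

# Qualified attribute names use the same '{namespace-uri}local-name'
# syntax as element names; nsmap controls the prefix that gets written.
elem = etree.Element("my_node_name",
                     {"{%s}nil" % XSI: "true"},
                     nsmap={"xsi": XSI})

print(etree.tostring(elem))
# <my_node_name xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
#               xsi:nil="true"/>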

btw, if you need to generate xml schemas, have a look at spyne:
http://spyne.io

Specifically:
https://github.com/arskom/spyne/blob/master/examples/xml/schema.py

best,
burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Language design

2013-09-11 Thread Burak Arslan
On 09/10/13 09:09, Steven D'Aprano wrote:
 What design mistakes, traps or gotchas do you think Python has? 

My favourite gotcha is this:

elt, = elts

It's a nice and compact way to do both:

assert len(elts) == 0
elt = elts[0]

but it sure looks strange at first sight. As a bonus, it works on any
iterable, not just ones that support __getitem__.

burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Language design

2013-09-11 Thread Burak Arslan
On 09/11/13 17:52, Ethan Furman wrote:
 On 09/11/2013 03:38 AM, Burak Arslan wrote:
 On 09/10/13 09:09, Steven D'Aprano wrote:
 What design mistakes, traps or gotchas do you think Python has?

 My favourite gotcha is this:

  elt, = elts

 It's a nice and compact way to do both:

  assert len(elts) == 0

 Perhaps you meant 'assert len(elts) == 1' ?


yes :)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Newbie: static typing?

2013-08-06 Thread Burak Arslan
On 08/06/13 13:12, Rui Maciel wrote:
 Joshua Landau wrote:

 What's the actual problem you're facing? Where do you feel that you
 need to verify types?
 A standard case would be when there's a function which is designed expecting 
 that all operands support a specific interface or contain specific 
 attributes.

 In other words, when passing an unsupported type causes problems.


Hi,

First, let's get over the fact that, with dynamic typing, code fails at
runtime. Irrespective of language, you just shouldn't ship untested
code, so I say that's not an argument against dynamic typing.

This behaviour is only a problem when code fails *too late* into the
runtime -- i.e. when you don't see the offending value in the stack trace.

For example, consider you append values to a list and the values in that
list get processed somewhere else. If your code fails because of an
invalid value, your stack trace is useless, because that value should
not be there in the first place. The code should fail when appending to
that list and not when processing it.

The too late case is a bit tough to illustrate. This could be a rough
example: https://gist.github.com/plq/6163839 Imagine that the list there
is progressively constructed somewhere else in the code and later
processed by the sq_all function. As you can see, the stack trace is
pretty useless as we don't see how that value got there.

In such cases, you do need manual type checking.

Yet, as someone else noted, naively using isinstance() for type checking
breaks duck typing. So you should read up on abstract base classes:
http://docs.python.org/2/glossary.html#term-abstract-base-class
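
A minimal sketch of the fail-early idea combined with an ABC check
(numbers.Number is an abstract base class, so third-party numeric types
still pass; the function names are made up):

import numbers

def append_value(values, value):
    # Fail where the bad value is introduced, not where it's consumed.
    if not isinstance(value, numbers.Number):
        raise TypeError("expected a number, got %r" % (value,))
    values.append(value)

values = []
append_value(values, 4)
append_value(values, "oops")  # raises here, with a useful stack trace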

That said, I've been writing Python for several years now, and I've
needed to resort to this technique only once (I was working on a
compiler). Most of the time, you'll be just fine without any manual type
checking.

Best regards,
Burak


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Working with XML/XSD

2013-08-06 Thread Burak Arslan
On 08/06/13 01:56, David Barroso wrote:
 Hello,
 I was wondering if someone could point me in the right direction. I
 would like to develop some scripts to manage Cisco routers and
 switches using XML. However, I am not sure where to start. Does
 someone have some experience working with XML, Schemas and things like
 that? Which libraries do you use? Do you know of any good tutorial?


Hi,

I develop Spyne (http://spyne.io). It lets you define Xml Schema
types, generate the Schema documents (and, in the upcoming release,
parse them), and it does both serialization and deserialization of
python objects from and to xml according to the definitions in the xml
schema. It also does RPC :)

In case you don't want to use a framework, use lxml, it's a very good
xml manipulation library based on libxml2/libxslt. You can use
lxml.objectify for xml serialization as well.
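
A minimal sketch of lxml.objectify in action (the element names are made
up):

from lxml import etree, objectify

doc = objectify.fromstring(
    "<router><hostname>r1</hostname><iface id='ge-0/0/0'/></router>")

print(doc.hostname)          # child elements become attributes: r1
print(doc.iface.get("id"))   # xml attributes stay dict-like: ge-0/0/0
print(etree.tostring(doc))   # and it serializes back to xml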

Best,
Burak

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generating HTML

2013-07-29 Thread Burak Arslan
Hi,

On 07/29/13 14:41, Morten Guldager wrote:
 Something like:
   table_struct = ['table', ['tr', ['td', {"class"="red"}, "this is
 red"], ['td', {"class"="blue"}, "this is not red"]]]
   html = struct2html(table_struct)

 Suggestions?


See: http://lxml.de/lxmlhtml.html#creating-html-with-the-e-factory


Python 2.7.5 (default, Jul 16 2013, 00:38:32)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml.html.builder import E
>>> e = E.html(
...     E.head(
...         E.title("Sample Html Page")
...     ),
...     E.body(
...         E.div(
...             E.h1("Hello World"),
...             E.p(
...                 "lxml is quite nice when it comes to generating HTML "
...                 "from scratch"
...             ),
...             id="container",
...         )
...     )
... )
>>> from lxml.html import tostring
>>> print tostring(e, pretty_print=True)
<html>
<head><title>Sample Html Page</title></head>
<body><div id="container">
<h1>Hello World</h1>
<p>lxml is quite nice when it comes to generating HTML from scratch</p>
</div></body>
</html>


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python - remote object protocols and security

2013-07-15 Thread Burak Arslan

Hi,

On 07/15/13 13:30, Chris Angelico wrote:
 On Mon, Jul 15, 2013 at 10:26 PM, Jean-Michel Pichavant
 jeanmic...@sequans.com wrote:
 Basically, I need to transfer numbers (int). Possibly dictionaries like 
 {string: int} in order to structure things a little bit.
 I strongly recommend JSON, then. It's a well-known system, it's
 compact, it's secure, and Python comes with a json module.


Especially for numbers, MessagePack is more efficient. Its API is
identical to Json, so it's almost a drop-in replacement.

A project that I've been working on, Spyne, is designed to implement
public RPC services. It supports both Json and MessagePack. Here's the
json example:
http://spyne.io/#inprot=JsonDocumentoutprot=JsonDocuments=rpctpt=WsgiApplicationvalidator=true

If you choose to use MessagePack, you must HTTP POST the MessagePack
document the same way you'd POST the json document.

Best regards,
Burak

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python - remote object protocols and security

2013-07-15 Thread Burak Arslan
On 07/15/13 13:51, Chris Angelico wrote:
 So the only bit you still need is: How do you transmit this across the
 network? Since it's now all just bytes, that's easy enough to do, eg
 with TCP. But that depends on the rest of your system, and is a quite
 separate question - and quite probably one you already have the answer
 to.

For Json, you need to have a way of delimiting messages -- to my
knowledge, Python's json library does not support parsing streams.

You can send the json document in the body of a Http POST, or a ZeroMQ
message, or in a UDP datagram (if you can guarantee it fits inside one)
or in a simple TCP-based encapsulation mechanism that e.g. prepends the
length of the message to the document.

e.g.

'\x00\x00\x00\x07{"a":1}'
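
A minimal sketch of that length-prefix framing over a plain socket (sock
is assumed to be an already-connected TCP socket):

import json
import struct

def send_doc(sock, obj):
    payload = json.dumps(obj).encode('utf-8')
    # 4-byte big-endian length, then the json document itself.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exactly(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("connection closed mid-message")
        buf += chunk
    return buf

def recv_doc(sock):
    (length,) = struct.unpack('!I', recv_exactly(sock, 4))
    return json.loads(recv_exactly(sock, length).decode('utf-8'))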

As MessagePack already does this, you can send MessagePack documents via
an ordinary TCP socket and easily recover them on the other side of the
pipe.

>>> import msgpack; from StringIO import StringIO
>>> s = StringIO(msgpack.dumps({"a": 1}) + msgpack.dumps({"b": 2}))
>>> for doc in msgpack.Unpacker(s):
...     print doc
...
{'a': 1}
{'b': 2}

This won't work with Json:

>>> import json; from StringIO import StringIO
>>> s = StringIO(json.dumps({"a": 1}) + json.dumps({"b": 2}))
>>> for doc in json.load(s): # or whatever ???
...     print doc
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/json/__init__.py", line 290, in load
    **kw)
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 368, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 9 - line 1 column 17 (char 8 - 16)

Note that this is a limitation of python's Json parser, not Json itself.

There seems to be a json.scanner module that *sounds* like it provides
this functionality,
but I couldn't find any documentation about it.

Alternatively, PyYaml can also parse streams. yaml.{dump,load}_all()
provide pickle-like unsafe (de)serialization support and
yaml.safe_{dump,load}_all provide msgpack-like safe-but-limited stream
parsing support.


also;

On 07/15/13 13:57, Chris Angelico wrote:
 But what I meant was that the [Json] protocol itself is designed with
 security restrictions in mind. It's designed not to fetch additional
 content from the network (as XML can),

Can you explain how parsing XML can fetch data from the network?


Best,
Burak
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python - remote object protocols and security

2013-07-15 Thread Burak Arslan
On 07/15/13 16:53, Chris Angelico wrote:
 I haven't looked into the details, but there was one among a list of
 exploits that was being discussed a few months ago; it involved XML
 schemas, I think, and quite a few generic XML parsers could be tricked
 into fetching arbitrary documents. Whether this could be used for
 anything more serious than a document-viewed receipt or a denial of
 service (via latency) I don't know, but if nothing else, it's a vector
 that JSON simply doesn't have. ChrisA 

I must have missed that exploit report, can you provide a link?

Parsing arbitrary xml documents and parsing xml schema documents and
applying xml schema semantics to these documents are two very different
operations.

Xml schemas are not "tricked" into fetching arbitrary documents;
xs:include and xs:import fetch external documents, and it's a well-known
feature. If you don't want this, you should ship all of the schema
documents together and generate the schemas in a way that does not include
any external references. So I'm surprised this was presented as a security
exploit.

Json schemas also have similar functionality:
http://json-schema.org/latest/json-schema-core.html#anchor30


if canonical dereferencing is used, the implementation will dereference
this URI, and fetch the content at this URI;


So I don't understand how you're so sure of yourself, but to me, it
seems like Json schemas have the same attack vectors.

Best regards,
Burak
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Convert SOAP response (ArrayOfInt) to Python list

2013-07-05 Thread Burak Arslan


Hi,

FYI, There's a soap-specific python.org list: s...@python.org


On 07/04/13 20:57, robert.wink...@bioprocess.org wrote:
 Thanks to the OSA library, which works for SOAP requests with Python 3.x, I 
 can now use SOAP services at http://www.chemspider.com.

 The results structure is 
   <GetAsyncSearchResultResult>
 <int>int</int>
 <int>int</int>
   </GetAsyncSearchResultResult>

 The result is a list of accession numbers (which correspond to chemical 
 compounds) and I get them in the following format:

 [snip]

 How could I transform this to a simple python list?

I did not use OSA, but assuming print(ret) prints that, you should do
ret.int to get your list.
It should already be a regular Python list.

I hope that helps.

Best,
Burak

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Must we include urllib just to decode a URL-encoded string, when using Requests?

2013-06-13 Thread Burak Arslan

On 06/13/13 16:25, Dotan Cohen wrote:

On Thu, Jun 13, 2013 at 4:20 PM, Robert Kern robert.k...@gmail.com wrote:

Yes. Do you think there is a problem with doing so?


I'm pretty sure that Requests will use either urllib or urllib2,
depending on what is available on the server. I would like to use
whatever Requests is currently using, rather than import the other.
Can I tell which library Requests is currently using and use that?



Paste this into your python console; it'll show you what modules requests 
imports:



import sys
p = set(sys.modules)
import requests
for m in sorted(set(sys.modules) - p):
  print(m)


burak
--
http://mail.python.org/mailman/listinfo/python-list


Re: serialize a class to XML and back

2013-05-23 Thread Burak Arslan

On 05/23/13 13:37, Schneider wrote:

Hi list,

how can I serialize a python class to XML? Plus a way to get the class 
back from the XML?


My aim is to store instances of this class in a database.


Hi,

I'm working on a project called Spyne (http://spyne.io). With one object 
definition, you can serialize to/from xml and save to database via 
SQLAlchemy.


The code generator on the site has examples for RPC and SQL. If you're 
not interested in doing RPC with the resulting document, here's an 
example for just the xml part: 
https://github.com/arskom/spyne/blob/master/examples/xml_utils.py


Best regards,
Burak

--
http://mail.python.org/mailman/listinfo/python-list


Re: Parsing soap result

2013-04-18 Thread Burak Arslan


Hi,

On 04/18/13 13:46, Ombongi Moraa Fe wrote:

Hi Burak, Team,



Apparently I was too deep in answering support questions for my company 
:) This is python-list, so It's just me here :)



Your solution worked perfectly thanks.

Could you share the logic of this solution?



You're using suds. Let's have a look at what you see:

[(DeliveryInformation){
   address = 254727
   deliveryStatus = DeliveredToNetwork
 }]

You have it in square brackets, so it's an array. You apparently want 
the first element, so it's result[0]. It's of type DeliveryInformation 
with two fields; they are what you see there. Depending on which 
soap mode (rpc/document) your server uses, you should either use 
result[0].deliveryStatus or result[0].DeliveryInformation.deliveryStatus.


I guess I got too much experience doing SOAP with python :) (I maintain 
spyne, see: http://spyne.io)


I'm glad it worked.

Best,
Burak

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parsing soap result

2013-04-17 Thread Burak Arslan

On 04/17/13 16:50, Ombongi Moraa Fe wrote:

My

client.service.gere(ri)

method call logs the below soap response in my log file.

 <?xml version="1.0" encoding="utf-8" ?><soapenv:Envelope 
 xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><ns1:gere 
 xmlns:ns1="http://www.csapi.org/schema/parlayx/sms/send/v2_2/local"><ns1:result><address>254727</address><deliveryStatus>DeliveredToNetwork</deliveryStatus></ns1:result></ns1:gere></soapenv:Body></soapenv:Envelope>



If I assign the client.service.gere(ri) to a variable, i get the 
output on my screen:


result=client.service.gere(ri)

output:
[(DeliveryInformation){
   address = 254727
   deliveryStatus = DeliveredToNetwork
 }]

string functions replace() and strip don't work.

how do I use xml.etree.ElementTree to print the parameters address and 
deliveryStatus? Or is there a better python method?


hi,

try:

result[0].deliveryStatus

or

result[0].DeliveryInformation.deliveryStatus


and let us know.

best,
burak

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Passing soap response to python script

2013-04-05 Thread Burak Arslan

Hello,

On 04/05/13 12:52, Ombongi Moraa Fe wrote:

Hello Group,

I am newbie to python and getting my way around. However, my first project
that introduced me to the language deals with SOAP requests.


Before going any further, there's a project called suds which 
implements a soap client. If the service you're consuming exposes a WSDL 
document, it's very easy to get started with suds. Here's its home page: 
https://fedorahosted.org/suds/




The server I communicate with basically sends me 2 soap responses; One with
a requestIdentifier which I should use to query the delivery status of the
message originally sent. Therefore, my main.py script already calls the
client.last_received() which contains 1 value for requestIdentifier.

I have another script getDelivery.py; In this script I need to pass 6
parameters which we used in the main.py script and add the
requestIdentifier parameter - total 7 paramters.

I want to call the getDelivery.py script at the end of main.py script after
the client.last_received().

q1) How do I get the requestIdentifier parameter that is received through
the client.last_received() method call?

q2)  How do I pass these 7 parameters to the method call from the main.py
script?

q3) And how do I receive these passed parameters in the getDelivery.py
script?


It's a bit difficult to answer as you didn't provide any source code, 
but let me try.


First, please modify the code inside getDelivery.py to be inside a 
function. For argument's sake, let's say it's called 'some_function'. 
Then, in main.py, you can import the function and call it. Assuming 
getDelivery.py and main.py are in the same directory, this should work:


from getDelivery import some_function

def main():
    # some code that defines the 6 other parameters somehow
    # (...)

    last = client.last_received()

    some_function(param1, param2, param3, param4, param5, param6, last)


I am conversant with the php post/get but still having trouble doing the
same in python. I've checked on urllib2 and request modules but still
having trouble understanding their implementation.


First, just to confirm: urllib2 is a http client library. It's not a 
http server.


The most fashionable http client library in the python world nowadays is 
requests: http://www.python-requests.org/


It's pretty easy to use and actually, it wraps urllib2. You can give it 
a try.


I hope that helps.

Best regards,
Burak

PS: If you're not sure about how to install Python packages, I'm sure a 
quick google would turn up a lot of resources.


--
http://mail.python.org/mailman/listinfo/python-list


[issue13284] email.utils.formatdate function does not handle timezones correctly.

2011-10-28 Thread Burak Arslan

New submission from Burak Arslan burak.ars...@arskom.com.tr:

There's an issue with email.utils.formatdate function, illustrated here: 
https://gist.github.com/1321994

for reference i'm on Europe/Istanbul timezone, which is +03:00 because of DST 
at the time of this writing.

I'm on stable Python 2.7.2 on gentoo linux. 

When I run the attached script, I get:

Fri, 28 Oct 2011 07:56:14 -
datetime.datetime(2011, 10, 28, 9, 56, 14, 945831, tzinfo=UTC)

when the local time is 12:56. So the second line is correct and the first
one is not.

let me know if you need any more information.
thanks for your attention.

--
components: Library (Lib)
files: test_formatdate.py
messages: 146551
nosy: burak.arslan
priority: normal
severity: normal
status: open
title: email.utils.formatdate function does not handle timezones correctly.
versions: Python 2.7
Added file: http://bugs.python.org/file23541/test_formatdate.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue13284
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue13284] email.utils.formatdate function does not handle timezones correctly.

2011-10-28 Thread Burak Arslan

Burak Arslan burak.ars...@arskom.com.tr added the comment:

Turns out timetuple() was not passing timezone information. The correct way of 
converting a datetime.datetime object to an rfc-2822 compliant date 
string seems to be:

email.utils.formatdate(time.mktime(a.utctimetuple()) + 1e-6 * a.microsecond
                       - time.timezone)

what a mess. if the above is indeed the right way to do this, is it possible to 
add the following function to the email.utils module?

def formatdatetime(dt_object):
    return email.utils.formatdate(time.mktime(dt_object.utctimetuple())
                                  + 1e-6 * dt_object.microsecond - time.timezone)

This works for datetime instances both with and without time zone information.

ps: i updated the code in the github link but not here.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue13284
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com