Hello community,

here is the log from the commit of package python-sqlparse for openSUSE:Factory 
checked in at 2015-05-18 22:26:28
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-sqlparse (Old)
 and      /work/SRC/openSUSE:Factory/.python-sqlparse.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-sqlparse"

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-sqlparse/python-sqlparse.changes  2014-11-18 22:45:47.000000000 +0100
+++ /work/SRC/openSUSE:Factory/.python-sqlparse.new/python-sqlparse.changes     2015-05-18 22:26:29.000000000 +0200
@@ -1,0 +2,53 @@
+Wed May 13 16:39:54 UTC 2015 - [email protected]
+
+- update to version 0.1.15:
+  * Fix a regression for identifiers with square brackets notation
+    (issue153, by darikg).
+  * Add missing SQL types (issue154, issue155, issue156, by
+    jukebox).
+  * Fix parsing of multi-line comments (issue172, by JacekPliszka).
+  * Fix parsing of escaped backslashes (issue174, by caseyching).
+  * Fix parsing of identifiers starting with underscore (issue175).
+  * Fix misinterpretation of IN keyword (issue183).
+  * Improve formatting of HAVING statements.
+  * Improve parsing of inline comments (issue163).
+  * Group comments to parent object (issue128, issue160).
+  * Add double precision builtin (issue169, by darikg).
+  * Add support for square bracket array indexing (issue170,
+    issue176, issue177 by darikg).
+  * Improve grouping of aliased elements (issue167, by darikg).
+  * Support comments starting with '#' character (issue178).
+- additional changes from version 0.1.14:
+  * Floats in UPDATE statements are now handled correctly
+    (issue145).
+  * Properly handle string literals in comparisons (issue148,
+    change proposed by aadis).
+  * Fix indentation when using tabs (issue146).
+  * Improved formatting in list when newlines precede commas
+    (issue140).
+- additional changes from version 0.1.13:
+  * Fix a regression in handling of NULL keywords introduced in
+    0.1.12.
+- additional changes from version 0.1.12:
+  * Fix handling of NULL keywords in aliased identifiers.
+  * Fix SerializerUnicode to split unquoted newlines (issue131, by
+    Michael Schuller).
+  * Fix handling of modulo operators without spaces (by gavinwahl).
+  * Improve parsing of identifier lists containing placeholders.
+  * Speed up query parsing of unquoted lines (by Michael Schuller).
+- additional changes from version 0.1.11:
+  * Fix incorrect parsing of string literals containing line breaks
+    (issue118).
+  * Fix typo in keywords, add MERGE, COLLECT keywords
+    (issue122/124, by Cristian Orellana).
+  * Improve parsing of string literals in columns.
+  * Fix parsing and formatting of statements containing EXCEPT
+    keyword.
+  * Fix Function.get_parameters() (issue126/127, by spigwitmer).
+  * Classify DML keywords (issue116, by Victor Hahn).
+  * Add missing FOREACH keyword.
+  * Grouping of BEGIN/END blocks.
+  * Python 2.5 isn't automatically tested anymore, since neither
+    Travis nor Tox supports it out of the box.
+
+-------------------------------------------------------------------
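
A quick illustration of two of the fixes listed above: the '#' comment
support (issue178) and the square bracket array indexing (issue170,
issue176, issue177). This is a sketch against the 0.1.15 API, not part
of the package submission:

    import sqlparse
    from sqlparse import tokens as T

    # '#' now starts a single-line comment (issue178)
    stmt = sqlparse.parse('select 1 # trailing comment')[0]
    assert stmt.tokens[-1].ttype is T.Comment.Single

    # square bracket indices are grouped into the identifier
    ident = sqlparse.parse('col[1]')[0].tokens[0]
    assert ident.get_name() == 'col'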

Old:
----
  sqlparse-0.1.10.tar.gz

New:
----
  sqlparse-0.1.15.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-sqlparse.spec ++++++
--- /var/tmp/diff_new_pack.6bVVw1/_old  2015-05-18 22:26:29.000000000 +0200
+++ /var/tmp/diff_new_pack.6bVVw1/_new  2015-05-18 22:26:29.000000000 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-sqlparse
 #
-# Copyright (c) 2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
+# Copyright (c) 2015 SUSE LINUX GmbH, Nuernberg, Germany.
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -17,7 +17,7 @@
 
 
 Name:           python-sqlparse
-Version:        0.1.10
+Version:        0.1.15
 Release:        0
 Summary:        Non-validating SQL parser
 License:        BSD-3-Clause

++++++ sqlparse-0.1.10.tar.gz -> sqlparse-0.1.15.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/AUTHORS new/sqlparse-0.1.15/AUTHORS
--- old/sqlparse-0.1.10/AUTHORS 2013-10-24 05:53:55.000000000 +0200
+++ new/sqlparse-0.1.15/AUTHORS 2015-04-12 06:43:27.000000000 +0200
@@ -6,15 +6,24 @@
 Alphabetical list of contributors:
 * Alexander Beedie <[email protected]>
 * Alexey Malyshev <[email protected]>
+* casey <[email protected]>
+* Cristian Orellana <[email protected]>
+* Darik Gamble <[email protected]>
 * Florian Bauer <[email protected]>
+* Gavin Wahl <[email protected]>
+* JacekPliszka <[email protected]>
 * Jesús Leganés Combarro "Piranna" <[email protected]>
 * Kevin Jing Qiu <[email protected]>
+* Michael Schuller <[email protected]>
 * Mike Amy <[email protected]>
 * mulos <[email protected]>
 * Piet Delport <[email protected]>
-* prudhvi <[email protected]>
-* Robert Nix <[email protected]>
-* Yago Riveiro <[email protected]>
+* Prudhvi Vatala <[email protected]>
 * quest <[email protected]>
+* Robert Nix <[email protected]>
+* Rocky Meza <[email protected]>
+* spigwitmer <[email protected]>
+* Victor Hahn <[email protected]>
 * vthriller <[email protected]>
 * wayne.wuw <[email protected]>
+* Yago Riveiro <[email protected]>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/CHANGES new/sqlparse-0.1.15/CHANGES
--- old/sqlparse-0.1.10/CHANGES 2013-11-02 07:44:17.000000000 +0100
+++ new/sqlparse-0.1.15/CHANGES 2015-04-15 18:17:23.000000000 +0200
@@ -1,3 +1,80 @@
+Release 0.1.15 (Apr 15, 2015)
+-----------------------------
+
+Bug Fixes
+* Fix a regression for identifiers with square brackets
+  notation (issue153, by darikg).
+* Add missing SQL types (issue154, issue155, issue156, by jukebox).
+* Fix parsing of multi-line comments (issue172, by JacekPliszka).
+* Fix parsing of escaped backslashes (issue174, by caseyching).
+* Fix parsing of identifiers starting with underscore (issue175).
+* Fix misinterpretation of IN keyword (issue183).
+
+Enhancements
+* Improve formatting of HAVING statements.
+* Improve parsing of inline comments (issue163).
+* Group comments to parent object (issue128, issue160).
+* Add double precision builtin (issue169, by darikg).
+* Add support for square bracket array indexing (issue170, issue176,
+  issue177 by darikg).
+* Improve grouping of aliased elements (issue167, by darikg).
+* Support comments starting with '#' character (issue178).
+
+
+Release 0.1.14 (Nov 30, 2014)
+-----------------------------
+
+Bug Fixes
+* Floats in UPDATE statements are now handled correctly (issue145).
+* Properly handle string literals in comparisons (issue148, change proposed
+  by aadis).
+* Fix indentation when using tabs (issue146).
+
+Enhancements
+* Improved formatting in list when newlines precede commas (issue140).
+
+
+Release 0.1.13 (Oct 09, 2014)
+-----------------------------
+
+Bug Fixes
+* Fix a regression in handling of NULL keywords introduced in 0.1.12.
+
+
+Release 0.1.12 (Sep 20, 2014)
+-----------------------------
+
+Bug Fixes
+* Fix handling of NULL keywords in aliased identifiers.
+* Fix SerializerUnicode to split unquoted newlines (issue131, by Michael Schuller).
+* Fix handling of modulo operators without spaces (by gavinwahl).
+
+Enhancements
+* Improve parsing of identifier lists containing placeholders.
+* Speed up query parsing of unquoted lines (by Michael Schuller).
+
+
+Release 0.1.11 (Feb 07, 2014)
+-----------------------------
+
+Bug Fixes
+* Fix incorrect parsing of string literals containing line breaks (issue118).
+* Fix typo in keywords, add MERGE, COLLECT keywords (issue122/124,
+  by Cristian Orellana).
+* Improve parsing of string literals in columns.
+* Fix parsing and formatting of statements containing EXCEPT keyword.
+* Fix Function.get_parameters() (issue126/127, by spigwitmer).
+
+Enhancements
+* Classify DML keywords (issue116, by Victor Hahn).
+* Add missing FOREACH keyword.
+* Grouping of BEGIN/END blocks.
+
+Other
+* Python 2.5 isn't automatically tested anymore, since neither Travis
+  nor Tox supports it out of the box.
+
+
 Release 0.1.10 (Nov 02, 2013)
 -----------------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/PKG-INFO new/sqlparse-0.1.15/PKG-INFO
--- old/sqlparse-0.1.10/PKG-INFO        2013-11-02 07:46:42.000000000 +0100
+++ new/sqlparse-0.1.15/PKG-INFO        2015-04-15 18:19:17.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: sqlparse
-Version: 0.1.10
+Version: 0.1.15
 Summary: Non-validating SQL parser
 Home-page: https://github.com/andialbrecht/sqlparse
 Author: Andi Albrecht
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/README.rst new/sqlparse-0.1.15/README.rst
--- old/sqlparse-0.1.10/README.rst      2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/README.rst      2015-04-09 11:43:40.000000000 +0200
@@ -44,7 +44,7 @@
   https://github.com/andialbrecht/sqlparse/issues
 
 Online Demo
-  http://sqlformat.appspot.com
+  http://sqlformat.org
 
 
 python-sqlparse is licensed under the BSD license.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/__init__.py new/sqlparse-0.1.15/sqlparse/__init__.py
--- old/sqlparse-0.1.10/sqlparse/__init__.py    2013-11-02 07:41:50.000000000 +0100
+++ new/sqlparse-0.1.15/sqlparse/__init__.py    2015-04-15 18:17:36.000000000 +0200
@@ -6,7 +6,7 @@
 """Parse SQL statements."""
 
 
-__version__ = '0.1.10'
+__version__ = '0.1.15'
 
 
 # Setup namespace
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/engine/grouping.py new/sqlparse-0.1.15/sqlparse/engine/grouping.py
--- old/sqlparse-0.1.10/sqlparse/engine/grouping.py     2013-10-24 08:29:52.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/engine/grouping.py     2015-04-12 15:14:54.000000000 +0200
@@ -51,19 +51,21 @@
                                            ttype, value)
 
 
+def _find_matching(idx, tlist, start_ttype, start_value, end_ttype, end_value):
+    depth = 1
+    for tok in tlist.tokens[idx:]:
+        if tok.match(start_ttype, start_value):
+            depth += 1
+        elif tok.match(end_ttype, end_value):
+            depth -= 1
+            if depth == 1:
+                return tok
+    return None
+
+
 def _group_matching(tlist, start_ttype, start_value, end_ttype, end_value,
                     cls, include_semicolon=False, recurse=False):
-    def _find_matching(i, tl, stt, sva, ett, eva):
-        depth = 1
-        for n in xrange(i, len(tl.tokens)):
-            t = tl.tokens[n]
-            if t.match(stt, sva):
-                depth += 1
-            elif t.match(ett, eva):
-                depth -= 1
-                if depth == 1:
-                    return t
-        return None
+
     [_group_matching(sgroup, start_ttype, start_value, end_ttype, end_value,
                      cls, include_semicolon) for sgroup in tlist.get_sublists()
      if recurse]
@@ -99,6 +101,16 @@
                     sql.For, True)
 
 
+def group_foreach(tlist):
+    _group_matching(tlist, T.Keyword, 'FOREACH', T.Keyword, 'END LOOP',
+                    sql.For, True)
+
+
+def group_begin(tlist):
+    _group_matching(tlist, T.Keyword, 'BEGIN', T.Keyword, 'END',
+                    sql.Begin, True)
+
+
 def group_as(tlist):
 
     def _right_valid(token):
@@ -107,6 +119,8 @@
         return not token.ttype in (T.DML, T.DDL)
 
     def _left_valid(token):
+        if token.ttype is T.Keyword and token.value in ('NULL',):
+            return True
         return token.ttype is not T.Keyword
 
     _group_left_right(tlist, T.Keyword, 'AS', sql.Identifier,
@@ -122,9 +136,10 @@
 def group_comparison(tlist):
 
     def _parts_valid(token):
-        return (token.ttype in (T.String.Symbol, T.Name, T.Number,
+        return (token.ttype in (T.String.Symbol, T.String.Single,
+                                T.Name, T.Number, T.Number.Float,
                                 T.Number.Integer, T.Literal,
-                                T.Literal.Number.Integer)
+                                T.Literal.Number.Integer, T.Name.Placeholder)
                 or isinstance(token, (sql.Identifier, sql.Parenthesis))
                 or (token.ttype is T.Keyword
                     and token.value.upper() in ['NULL', ]))
@@ -142,14 +157,19 @@
         # TODO: Usage of Wildcard token is ambivalent here.
         x = itertools.cycle((
             lambda y: (y.match(T.Punctuation, '.')
-                       or y.ttype is T.Operator
-                       or y.ttype is T.Wildcard),
+                       or y.ttype in (T.Operator,
+                                      T.Wildcard,
+                                      T.Name)
+                       or isinstance(y, sql.SquareBrackets)),
             lambda y: (y.ttype in (T.String.Symbol,
                                    T.Name,
                                    T.Wildcard,
+                                   T.Literal.String.Single,
                                    T.Literal.Number.Integer,
                                    T.Literal.Number.Float)
-                       or isinstance(y, (sql.Parenthesis, sql.Function)))))
+                       or isinstance(y, (sql.Parenthesis,
+                                         sql.SquareBrackets,
+                                         sql.Function)))))
         for t in tl.tokens[i:]:
             # Don't take whitespaces into account.
             if t.ttype is T.Whitespace:
@@ -158,6 +178,8 @@
             if next(x)(t):
                 yield t
             else:
+                if isinstance(t, sql.Comment) and t.is_multiline():
+                    yield t
                 raise StopIteration
 
     def _next_token(tl, i):
@@ -222,6 +244,7 @@
                    lambda t: t.ttype == T.Keyword,
                    lambda t: isinstance(t, sql.Comparison),
                    lambda t: isinstance(t, sql.Comment),
+                   lambda t: t.ttype == T.Comment.Multiline,
                    ]
     tcomma = tlist.token_next_match(idx, T.Punctuation, ',')
     start = None
@@ -255,9 +278,48 @@
                 tcomma = next_
 
 
-def group_parenthesis(tlist):
-    _group_matching(tlist, T.Punctuation, '(', T.Punctuation, ')',
-                    sql.Parenthesis)
+def group_brackets(tlist):
+    """Group parentheses () or square brackets []
+
+        This is just like _group_matching, but complicated by the fact that
+        round brackets can contain square bracket groups and vice versa
+    """
+
+    if isinstance(tlist, (sql.Parenthesis, sql.SquareBrackets)):
+        idx = 1
+    else:
+        idx = 0
+
+    # Find the first opening bracket
+    token = tlist.token_next_match(idx, T.Punctuation, ['(', '['])
+
+    while token:
+        start_val = token.value  # either '(' or '['
+        if start_val == '(':
+            end_val = ')'
+            group_class = sql.Parenthesis
+        else:
+            end_val = ']'
+            group_class = sql.SquareBrackets
+
+        tidx = tlist.token_index(token)
+
+        # Find the corresponding closing bracket
+        end = _find_matching(tidx, tlist, T.Punctuation, start_val,
+                             T.Punctuation, end_val)
+
+        if end is None:
+            idx = tidx + 1
+        else:
+            group = tlist.group_tokens(group_class,
+                                       tlist.tokens_between(token, end))
+
+            # Check for nested bracket groups within this group
+            group_brackets(group)
+            idx = tlist.token_index(group) + 1
+
+        # Find the next opening bracket
+        token = tlist.token_next_match(idx, T.Punctuation, ['(', '['])
 
 
 def group_comments(tlist):
@@ -286,7 +348,7 @@
      if not isinstance(sgroup, sql.Where)]
     idx = 0
     token = tlist.token_next_match(idx, T.Keyword, 'WHERE')
-    stopwords = ('ORDER', 'GROUP', 'LIMIT', 'UNION')
+    stopwords = ('ORDER', 'GROUP', 'LIMIT', 'UNION', 'EXCEPT', 'HAVING')
     while token:
         tidx = tlist.token_index(token)
         end = tlist.token_next_match(tidx + 1, T.Keyword, stopwords)
@@ -353,10 +415,27 @@
         token = tlist.token_next_by_type(idx, T.Keyword.Order)
 
 
+def align_comments(tlist):
+    [align_comments(sgroup) for sgroup in tlist.get_sublists()]
+    idx = 0
+    token = tlist.token_next_by_instance(idx, sql.Comment)
+    while token:
+        before = tlist.token_prev(tlist.token_index(token))
+        if isinstance(before, sql.TokenList):
+            grp = tlist.tokens_between(before, token)[1:]
+            before.tokens.extend(grp)
+            for t in grp:
+                tlist.tokens.remove(t)
+            idx = tlist.token_index(before) + 1
+        else:
+            idx = tlist.token_index(token) + 1
+        token = tlist.token_next_by_instance(idx, sql.Comment)
+
+
 def group(tlist):
     for func in [
             group_comments,
-            group_parenthesis,
+            group_brackets,
             group_functions,
             group_where,
             group_case,
@@ -367,7 +446,11 @@
             group_aliased,
             group_assignment,
             group_comparison,
+            align_comments,
             group_identifier_list,
             group_if,
-            group_for]:
+            group_for,
+            group_foreach,
+            group_begin,
+            ]:
         func(tlist)
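
For context on the grouping changes above: group_begin wraps BEGIN/END
blocks into a single sql.Begin token, and group_brackets replaces
group_parenthesis so that round and square brackets can nest inside
each other. A minimal sketch, mirroring the new tests further below:

    import sqlparse
    from sqlparse import sql

    # BEGIN/END blocks become one sql.Begin group
    p = sqlparse.parse('BEGIN foo END')[0]
    assert isinstance(p.tokens[0], sql.Begin)

    # square brackets may contain parentheses and vice versa
    ident = sqlparse.parse('col[x][(y+1)*2]')[0].tokens[0]
    assert len(list(ident.get_array_indices())) == 2
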
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/filters.py new/sqlparse-0.1.15/sqlparse/filters.py
--- old/sqlparse-0.1.10/sqlparse/filters.py     2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/filters.py     2015-04-12 15:14:54.000000000 +0200
@@ -11,6 +11,7 @@
 from sqlparse.tokens import (Comment, Comparison, Keyword, Name, Punctuation,
                              String, Whitespace)
 from sqlparse.utils import memoize_generator
+from sqlparse.utils import split_unquoted_newlines
 
 
 # --------------------------
@@ -245,6 +246,20 @@
                     token.value = ' '
             last_was_ws = token.is_whitespace()
 
+    def _stripws_identifierlist(self, tlist):
+        # Removes newlines before commas, see issue140
+        last_nl = None
+        for token in tlist.tokens[:]:
+            if (token.ttype is T.Punctuation
+                and token.value == ','
+                and last_nl is not None):
+                tlist.tokens.remove(last_nl)
+            if token.is_whitespace():
+                last_nl = token
+            else:
+                last_nl = None
+        return self._stripws_default(tlist)
+
     def _stripws_parenthesis(self, tlist):
         if tlist.tokens[1].is_whitespace():
             tlist.tokens.pop(1)
@@ -256,7 +271,11 @@
         [self.process(stack, sgroup, depth + 1)
          for sgroup in stmt.get_sublists()]
         self._stripws(stmt)
-        if depth == 0 and stmt.tokens[-1].is_whitespace():
+        if (
+            depth == 0
+            and stmt.tokens
+            and stmt.tokens[-1].is_whitespace()
+        ):
             stmt.tokens.pop(-1)
 
 
@@ -301,7 +320,7 @@
     def _split_kwds(self, tlist):
         split_words = ('FROM', 'STRAIGHT_JOIN$', 'JOIN$', 'AND', 'OR',
                        'GROUP', 'ORDER', 'UNION', 'VALUES',
-                       'SET', 'BETWEEN')
+                       'SET', 'BETWEEN', 'EXCEPT', 'HAVING')
 
         def _next_token(i):
             t = tlist.token_next_match(i, T.Keyword, split_words,
@@ -314,20 +333,21 @@
 
         idx = 0
         token = _next_token(idx)
+        added = set()
         while token:
             prev = tlist.token_prev(tlist.token_index(token), False)
             offset = 1
-            if prev and prev.is_whitespace():
+            if prev and prev.is_whitespace() and prev not in added:
                 tlist.tokens.pop(tlist.token_index(prev))
                 offset += 1
-            if (prev
-                and isinstance(prev, sql.Comment)
-                and (unicode(prev).endswith('\n')
-                     or unicode(prev).endswith('\r'))):
+            uprev = unicode(prev)
+            if (prev and (uprev.endswith('\n') or uprev.endswith('\r'))):
                 nl = tlist.token_next(token)
             else:
                 nl = self.nl()
+                added.add(nl)
                 tlist.insert_before(token, nl)
+                offset += 1
             token = _next_token(tlist.token_index(nl) + offset)
 
     def _split_statements(self, tlist):
@@ -351,7 +371,20 @@
 
     def _process_where(self, tlist):
         token = tlist.token_next_match(0, T.Keyword, 'WHERE')
-        tlist.insert_before(token, self.nl())
+        try:
+            tlist.insert_before(token, self.nl())
+        except ValueError:  # issue121, errors in statement
+            pass
+        self.indent += 1
+        self._process_default(tlist)
+        self.indent -= 1
+
+    def _process_having(self, tlist):
+        token = tlist.token_next_match(0, T.Keyword, 'HAVING')
+        try:
+            tlist.insert_before(token, self.nl())
+        except ValueError:  # issue121, errors in statement
+            pass
         self.indent += 1
         self._process_default(tlist)
         self.indent -= 1
@@ -375,13 +408,15 @@
         identifiers = list(tlist.get_identifiers())
         if len(identifiers) > 1 and not tlist.within(sql.Function):
             first = list(identifiers[0].flatten())[0]
-            num_offset = self._get_offset(first) - len(first.value)
+            if self.char == '\t':
+                # when using tabs we don't count the actual word length
+                # in spaces.
+                num_offset = 1
+            else:
+                num_offset = self._get_offset(first) - len(first.value)
             self.offset += num_offset
             for token in identifiers[1:]:
                 tlist.insert_before(token, self.nl())
-            for token in tlist.tokens:
-                if isinstance(token, sql.Comment):
-                    tlist.insert_after(token, self.nl())
             self.offset -= num_offset
         self._process_default(tlist)
 
@@ -534,10 +569,8 @@
 
     def process(self, stack, stmt):
         raw = unicode(stmt)
-        add_nl = raw.endswith('\n')
-        res = '\n'.join(line.rstrip() for line in raw.splitlines())
-        if add_nl:
-            res += '\n'
+        lines = split_unquoted_newlines(raw)
+        res = '\n'.join(line.rstrip() for line in lines)
         return res
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/keywords.py new/sqlparse-0.1.15/sqlparse/keywords.py
--- old/sqlparse-0.1.10/sqlparse/keywords.py    2013-10-24 05:52:50.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/keywords.py    2015-04-12 06:43:27.000000000 +0200
@@ -61,19 +61,20 @@
     'CLOB': tokens.Keyword,
     'CLOSE': tokens.Keyword,
     'CLUSTER': tokens.Keyword,
-    'COALSECE': tokens.Keyword,
+    'COALESCE': tokens.Keyword,
     'COBOL': tokens.Keyword,
     'COLLATE': tokens.Keyword,
     'COLLATION': tokens.Keyword,
     'COLLATION_CATALOG': tokens.Keyword,
     'COLLATION_NAME': tokens.Keyword,
     'COLLATION_SCHEMA': tokens.Keyword,
+    'COLLECT': tokens.Keyword,
     'COLUMN': tokens.Keyword,
     'COLUMN_NAME': tokens.Keyword,
     'COMMAND_FUNCTION': tokens.Keyword,
     'COMMAND_FUNCTION_CODE': tokens.Keyword,
     'COMMENT': tokens.Keyword,
-    'COMMIT': tokens.Keyword,
+    'COMMIT': tokens.Keyword.DML,
     'COMMITTED': tokens.Keyword,
     'COMPLETION': tokens.Keyword,
     'CONDITION_NUMBER': tokens.Keyword,
@@ -163,6 +164,7 @@
     'FINAL': tokens.Keyword,
     'FIRST': tokens.Keyword,
     'FORCE': tokens.Keyword,
+    'FOREACH': tokens.Keyword,
     'FOREIGN': tokens.Keyword,
     'FORTRAN': tokens.Keyword,
     'FORWARD': tokens.Keyword,
@@ -355,7 +357,7 @@
     'REVOKE': tokens.Keyword,
     'RIGHT': tokens.Keyword,
     'ROLE': tokens.Keyword,
-    'ROLLBACK': tokens.Keyword,
+    'ROLLBACK': tokens.Keyword.DML,
     'ROLLUP': tokens.Keyword,
     'ROUTINE': tokens.Keyword,
     'ROUTINE_CATALOG': tokens.Keyword,
@@ -401,7 +403,7 @@
     'SQLSTATE': tokens.Keyword,
     'SQLWARNING': tokens.Keyword,
     'STABLE': tokens.Keyword,
-    'START': tokens.Keyword,
+    'START': tokens.Keyword.DML,
     'STATE': tokens.Keyword,
     'STATEMENT': tokens.Keyword,
     'STATIC': tokens.Keyword,
@@ -492,7 +494,7 @@
 
     'ZONE': tokens.Keyword,
 
-
+    # Name.Builtin
     'ARRAY': tokens.Name.Builtin,
     'BIGINT': tokens.Name.Builtin,
     'BINARY': tokens.Name.Builtin,
@@ -506,6 +508,7 @@
     'DECIMAL': tokens.Name.Builtin,
     'FLOAT': tokens.Name.Builtin,
     'INT': tokens.Name.Builtin,
+    'INT8': tokens.Name.Builtin,
     'INTEGER': tokens.Name.Builtin,
     'INTERVAL': tokens.Name.Builtin,
     'LONG': tokens.Name.Builtin,
@@ -513,13 +516,15 @@
     'NUMERIC': tokens.Name.Builtin,
     'REAL': tokens.Name.Builtin,
     'SERIAL': tokens.Name.Builtin,
+    'SERIAL8': tokens.Name.Builtin,
+    'SIGNED': tokens.Name.Builtin,
     'SMALLINT': tokens.Name.Builtin,
+    'TEXT': tokens.Name.Builtin,
+    'TINYINT': tokens.Name.Builtin,
+    'UNSIGNED': tokens.Name.Builtin,
     'VARCHAR': tokens.Name.Builtin,
     'VARCHAR2': tokens.Name.Builtin,
     'VARYING': tokens.Name.Builtin,
-    'INT8': tokens.Name.Builtin,
-    'SERIAL8': tokens.Name.Builtin,
-    'TEXT': tokens.Name.Builtin,
 }
 
 
@@ -529,6 +534,7 @@
     'DELETE': tokens.Keyword.DML,
     'UPDATE': tokens.Keyword.DML,
     'REPLACE': tokens.Keyword.DML,
+    'MERGE': tokens.Keyword.DML,
     'DROP': tokens.Keyword.DDL,
     'CREATE': tokens.Keyword.DDL,
     'ALTER': tokens.Keyword.DDL,
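
The reclassifications above (COMMIT, ROLLBACK and START as Keyword.DML,
plus the new MERGE entry) change the token types these statements
produce. A small check, assuming the usual token_first() accessor on
parsed statements:

    import sqlparse
    from sqlparse import tokens as T

    # COMMIT is now tokenized as a DML keyword (see issue116)
    tok = sqlparse.parse('COMMIT')[0].token_first()
    assert tok.ttype is T.Keyword.DML
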
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/lexer.py new/sqlparse-0.1.15/sqlparse/lexer.py
--- old/sqlparse-0.1.10/sqlparse/lexer.py       2013-10-24 05:52:50.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/lexer.py       2015-04-12 15:14:54.000000000 +0200
@@ -164,10 +164,10 @@
 
     tokens = {
         'root': [
-            (r'--.*?(\r\n|\r|\n)', tokens.Comment.Single),
+            (r'(--|#).*?(\r\n|\r|\n)', tokens.Comment.Single),
             # $ matches *before* newline, therefore we have two patterns
             # to match Comment.Single
-            (r'--.*?$', tokens.Comment.Single),
+            (r'(--|#).*?$', tokens.Comment.Single),
             (r'(\r\n|\r|\n)', tokens.Newline),
             (r'\s+', tokens.Whitespace),
             (r'/\*', tokens.Comment.Multiline, 'multiline-comments'),
@@ -180,27 +180,34 @@
             (r'\$([^\W\d]\w*)?\$', tokens.Name.Builtin),
             (r'\?{1}', tokens.Name.Placeholder),
             (r'%\(\w+\)s', tokens.Name.Placeholder),
-            (r'[$:?%]\w+', tokens.Name.Placeholder),
+            (r'%s', tokens.Name.Placeholder),
+            (r'[$:?]\w+', tokens.Name.Placeholder),
             # FIXME(andi): VALUES shouldn't be listed here
             # see https://github.com/andialbrecht/sqlparse/pull/64
             (r'VALUES', tokens.Keyword),
             (r'@[^\W\d_]\w+', tokens.Name),
+            # IN is special, it may be followed by a parenthesis, but
+            # is never a function, see issue183
+            (r'in\b(?=[ (])?', tokens.Keyword),
             (r'[^\W\d_]\w*(?=[.(])', tokens.Name),  # see issue39
             (r'[-]?0x[0-9a-fA-F]+', tokens.Number.Hexadecimal),
             (r'[-]?[0-9]*(\.[0-9]+)?[eE][-]?[0-9]+', tokens.Number.Float),
             (r'[-]?[0-9]*\.[0-9]+', tokens.Number.Float),
             (r'[-]?[0-9]+', tokens.Number.Integer),
-            # TODO: Backslash escapes?
-            (r"(''|'.*?[^\\]')", tokens.String.Single),
+            (r"'(''|\\\\|\\'|[^'])*'", tokens.String.Single),
             # not a real string literal in ANSI SQL:
             (r'(""|".*?[^\\]")', tokens.String.Symbol),
-            (r'(\[.*[^\]]\])', tokens.Name),
+            # sqlite names can be escaped with [square brackets]. left bracket
+            # cannot be preceded by a word character or a right bracket --
+            # otherwise it's probably an array index
+            (r'(?<![\w\])])(\[[^\]]+\])', tokens.Name),
             (r'((LEFT\s+|RIGHT\s+|FULL\s+)?(INNER\s+|OUTER\s+|STRAIGHT\s+)?|(CROSS\s+|NATURAL\s+)?)?JOIN\b', tokens.Keyword),
             (r'END(\s+IF|\s+LOOP)?\b', tokens.Keyword),
             (r'NOT NULL\b', tokens.Keyword),
             (r'CREATE(\s+OR\s+REPLACE)?\b', tokens.Keyword.DDL),
+            (r'DOUBLE\s+PRECISION\b', tokens.Name.Builtin),
             (r'(?<=\.)[^\W\d_]\w*', tokens.Name),
-            (r'[^\W\d_]\w*', is_keyword),
+            (r'[^\W\d]\w*', is_keyword),
             (r'[;:()\[\],\.]', tokens.Punctuation),
             (r'[<>=~!]+', tokens.Operator.Comparison),
             (r'[+/@#%^&|`?^-]+', tokens.Operator),
@@ -209,7 +216,7 @@
             (r'/\*', tokens.Comment.Multiline, 'multiline-comments'),
             (r'\*/', tokens.Comment.Multiline, '#pop'),
             (r'[^/\*]+', tokens.Comment.Multiline),
-            (r'[/*]', tokens.Comment.Multiline)
+            (r'[/*]', tokens.Comment.Multiline),
         ]}
 
     def __init__(self):
@@ -290,7 +297,6 @@
             for rexmatch, action, new_state in statetokens:
                 m = rexmatch(text, pos)
                 if m:
-                    # print rex.pattern
                     value = m.group()
                     if value in known_names:
                         yield pos, known_names[value], value
@@ -312,7 +318,13 @@
                                     statestack.pop()
                                 elif state == '#push':
                                     statestack.append(statestack[-1])
-                                else:
+                                elif (
+                                    # Ugly hack - multiline-comments
+                                    # are not stackable
+                                    state != 'multiline-comments'
+                                    or not statestack
+                                    or statestack[-1] != 'multiline-comments'
+                                ):
                                     statestack.append(state)
                         elif isinstance(new_state, int):
                             # pop
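
Two of the lexer changes above can be sanity-checked directly: the
special-cased IN keyword (issue183) and the new '%s' placeholder rule.
An illustrative sketch only:

    import sqlparse
    from sqlparse import tokens as T

    # 'in(...)' keeps IN as a keyword instead of a function name
    p = sqlparse.parse('in(1, 2)')[0]
    assert p.tokens[0].ttype is T.Keyword

    # '%s' is recognized as a placeholder
    tok = sqlparse.parse('%s')[0].token_first()
    assert tok.ttype is T.Name.Placeholder
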
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/sql.py new/sqlparse-0.1.15/sqlparse/sql.py
--- old/sqlparse-0.1.10/sqlparse/sql.py 2013-10-24 08:37:21.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/sql.py 2015-04-12 15:14:54.000000000 +0200
@@ -390,21 +390,17 @@
 
     def get_alias(self):
         """Returns the alias for this identifier or ``None``."""
+
+        # "name AS alias"
         kw = self.token_next_match(0, T.Keyword, 'AS')
         if kw is not None:
-            alias = self.token_next(self.token_index(kw))
-            if alias is None:
-                return None
-        else:
-            next_ = self.token_next_by_instance(0, Identifier)
-            if next_ is None:
-                next_ = self.token_next_by_type(0, T.String.Symbol)
-                if next_ is None:
-                    return None
-            alias = next_
-        if isinstance(alias, Identifier):
-            return alias.get_name()
-        return self._remove_quotes(unicode(alias))
+            return self._get_first_name(kw, keywords=True)
+
+        # "name alias" or "complicated column expression alias"
+        if len(self.tokens) > 2:
+            return self._get_first_name(reverse=True)
+
+        return None
 
     def get_name(self):
         """Returns the name of this identifier.
@@ -422,18 +418,43 @@
         """Returns the real name (object name) of this identifier."""
         # a.b
         dot = self.token_next_match(0, T.Punctuation, '.')
+        if dot is not None:
+            return self._get_first_name(self.token_index(dot))
+
+        return self._get_first_name()
+
+    def get_parent_name(self):
+        """Return name of the parent object if any.
+
+        A parent object is identified by the first occurring dot.
+        """
+        dot = self.token_next_match(0, T.Punctuation, '.')
         if dot is None:
-            next_ = self.token_next_by_type(0, T.Name)
-            if next_ is not None:
-                return self._remove_quotes(next_.value)
             return None
-
-        next_ = self.token_next_by_type(self.token_index(dot),
-                                        (T.Name, T.Wildcard, T.String.Symbol))
-        if next_ is None:  # invalid identifier, e.g. "a."
+        prev_ = self.token_prev(self.token_index(dot))
+        if prev_ is None:  # something must be very wrong here..
             return None
-        return self._remove_quotes(next_.value)
+        return self._remove_quotes(prev_.value)
+
+    def _get_first_name(self, idx=None, reverse=False, keywords=False):
+        """Returns the name of the first token with a name"""
+
+        if idx and not isinstance(idx, int):
+            idx = self.token_index(idx) + 1
 
+        tokens = self.tokens[idx:] if idx else self.tokens
+        tokens = reversed(tokens) if reverse else tokens
+        types = [T.Name, T.Wildcard, T.String.Symbol]
+
+        if keywords:
+            types.append(T.Keyword)
+
+        for tok in tokens:
+            if tok.ttype in types:
+                return self._remove_quotes(tok.value)
+            elif isinstance(tok, Identifier) or isinstance(tok, Function):
+                return tok.get_name()
+        return None
 
 class Statement(TokenList):
     """Represents a SQL statement."""
@@ -467,19 +488,6 @@
 
     __slots__ = ('value', 'ttype', 'tokens')
 
-    def get_parent_name(self):
-        """Return name of the parent object if any.
-
-        A parent object is identified by the first occuring dot.
-        """
-        dot = self.token_next_match(0, T.Punctuation, '.')
-        if dot is None:
-            return None
-        prev_ = self.token_prev(self.token_index(dot))
-        if prev_ is None:  # something must be verry wrong here..
-            return None
-        return self._remove_quotes(prev_.value)
-
     def is_wildcard(self):
         """Return ``True`` if this identifier contains a wildcard."""
         token = self.token_next_by_type(0, T.Wildcard)
@@ -502,6 +510,14 @@
             return None
         return ordering.value.upper()
 
+    def get_array_indices(self):
+        """Returns an iterator of index token lists"""
+
+        for tok in self.tokens:
+            if isinstance(tok, SquareBrackets):
+                # Use [1:-1] index to discard the square brackets
+                yield tok.tokens[1:-1]
+
 
 class IdentifierList(TokenList):
     """A list of :class:`~sqlparse.sql.Identifier`\'s."""
@@ -527,6 +543,15 @@
         return self.tokens[1:-1]
 
 
+class SquareBrackets(TokenList):
+    """Tokens between square brackets"""
+
+    __slots__ = ('value', 'ttype', 'tokens')
+
+    @property
+    def _groupable_tokens(self):
+        return self.tokens[1:-1]
+
 class Assignment(TokenList):
     """An assignment like 'var := val;'"""
     __slots__ = ('value', 'ttype', 'tokens')
@@ -559,6 +584,9 @@
     """A comment."""
     __slots__ = ('value', 'ttype', 'tokens')
 
+    def is_multiline(self):
+        return self.tokens and self.tokens[0].ttype == T.Comment.Multiline
+
 
 class Where(TokenList):
     """A WHERE clause."""
@@ -626,6 +654,14 @@
         for t in parenthesis.tokens:
             if isinstance(t, IdentifierList):
                 return t.get_identifiers()
-            elif isinstance(t, Identifier):
+            elif isinstance(t, Identifier) or \
+                isinstance(t, Function) or \
+                t.ttype in T.Literal:
                 return [t,]
         return []
+
+
+class Begin(TokenList):
+    """A BEGIN/END block."""
+
+    __slots__ = ('value', 'ttype', 'tokens')
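
With the rewritten get_alias() above, aliases given without an explicit
AS keyword are resolved as well, including for dotted names.
Illustrative usage, matching the new tests further below:

    import sqlparse

    ident = sqlparse.parse('foo.bar baz')[0].tokens[0]
    assert ident.get_parent_name() == 'foo'
    assert ident.get_real_name() == 'bar'
    assert ident.get_alias() == 'baz'
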
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse/utils.py new/sqlparse-0.1.15/sqlparse/utils.py
--- old/sqlparse-0.1.10/sqlparse/utils.py       2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/sqlparse/utils.py       2015-04-12 15:14:54.000000000 +0200
@@ -4,6 +4,8 @@
 @author: piranna
 '''
 
+import re
+
 try:
     from collections import OrderedDict
 except ImportError:
@@ -94,3 +96,42 @@
                 yield item
 
     return wrapped_func
+
+
+# This regular expression replaces the home-cooked parser that was here before.
+# It is much faster, but requires an extra post-processing step to get the
+# desired results (that are compatible with what you would expect from the
+# str.splitlines() method).
+#
+# It matches groups of characters: newlines, quoted strings, or unquoted text,
+# and splits on that basis. The post-processing step puts those back together
+# into the actual lines of SQL.
+SPLIT_REGEX = re.compile(r"""
+(
+ (?:                     # Start of non-capturing group
+  (?:\r\n|\r|\n)      |  # Match any single newline, or
+  [^\r\n'"]+          |  # Match any character series without quotes or
+                         # newlines, or
+  "(?:[^"\\]|\\.)*"   |  # Match double-quoted strings, or
+  '(?:[^'\\]|\\.)*'      # Match single quoted strings
+ )
+)
+""", re.VERBOSE)
+
+LINE_MATCH = re.compile(r'(\r\n|\r|\n)')
+
+def split_unquoted_newlines(text):
+    """Split a string on all unquoted newlines.
+
+    Unlike str.splitlines(), this will ignore CR/LF/CR+LF if the requisite
+    character is inside of a string."""
+    lines = SPLIT_REGEX.split(text)
+    outputlines = ['']
+    for line in lines:
+        if not line:
+            continue
+        elif LINE_MATCH.match(line):
+            outputlines.append('')
+        else:
+            outputlines[-1] += line
+    return outputlines
\ No newline at end of file
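
The new split_unquoted_newlines() helper above is what
SerializerUnicode now uses instead of str.splitlines(): newlines inside
quoted strings no longer split the statement. A short example:

    from sqlparse.utils import split_unquoted_newlines

    text = "select 'a\nb'\nfrom foo"
    # the quoted newline survives; only the unquoted one splits
    assert split_unquoted_newlines(text) == ["select 'a\nb'", 'from foo']
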
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/sqlparse.egg-info/PKG-INFO new/sqlparse-0.1.15/sqlparse.egg-info/PKG-INFO
--- old/sqlparse-0.1.10/sqlparse.egg-info/PKG-INFO      2013-11-02 07:46:42.000000000 +0100
+++ new/sqlparse-0.1.15/sqlparse.egg-info/PKG-INFO      2015-04-15 18:19:16.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: sqlparse
-Version: 0.1.10
+Version: 0.1.15
 Summary: Non-validating SQL parser
 Home-page: https://github.com/andialbrecht/sqlparse
 Author: Andi Albrecht
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/test_format.py new/sqlparse-0.1.15/tests/test_format.py
--- old/sqlparse-0.1.10/tests/test_format.py    2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/tests/test_format.py    2015-04-12 15:14:54.000000000 +0200
@@ -61,6 +61,9 @@
         sql = 'select (/* sql starts here */ select 2)'
         res = sqlparse.format(sql, strip_comments=True)
         self.ndiffAssertEqual(res, 'select (select 2)')
+        sql = 'select (/* sql /* starts here */ select 2)'
+        res = sqlparse.format(sql, strip_comments=True)
+        self.ndiffAssertEqual(res, 'select (select 2)')
 
     def test_strip_ws(self):
         f = lambda sql: sqlparse.format(sql, strip_whitespace=True)
@@ -77,6 +80,23 @@
         s = 'select\n* /* foo */  from bar '
         self.ndiffAssertEqual(f(s), 'select * /* foo */ from bar')
 
+    def test_notransform_of_quoted_crlf(self):
+        # Make sure that CR/CR+LF characters inside string literals don't get
+        # affected by the formatter.
+
+        s1 = "SELECT some_column LIKE 'value\r'"
+        s2 = "SELECT some_column LIKE 'value\r'\r\nWHERE id = 1\n"
+        s3 = "SELECT some_column LIKE 'value\\'\r' WHERE id = 1\r"
+        s4 = "SELECT some_column LIKE 'value\\\\\\'\r' WHERE id = 1\r\n"
+
+        f = lambda x: sqlparse.format(x)
+
+        # Because of the use of
+        self.ndiffAssertEqual(f(s1), "SELECT some_column LIKE 'value\r'")
+        self.ndiffAssertEqual(f(s2), "SELECT some_column LIKE 'value\r'\nWHERE id = 1\n")
+        self.ndiffAssertEqual(f(s3), "SELECT some_column LIKE 'value\\'\r' WHERE id = 1\n")
+        self.ndiffAssertEqual(f(s4), "SELECT some_column LIKE 'value\\\\\\'\r' WHERE id = 1\n")
+
     def test_outputformat(self):
         sql = 'select * from foo;'
         self.assertRaises(SQLParseError, sqlparse.format, sql,
@@ -309,3 +329,18 @@
 def test_truncate_strings_doesnt_truncate_identifiers(sql):
     formatted = sqlparse.format(sql, truncate_strings=2)
     assert formatted == sql
+
+
+def test_having_produces_newline():
+    sql = (
+        'select * from foo, bar where bar.id = foo.bar_id'
+        ' having sum(bar.value) > 100')
+    formatted = sqlparse.format(sql, reindent=True)
+    expected = [
+        'select *',
+        'from foo,',
+        '     bar',
+        'where bar.id = foo.bar_id',
+        'having sum(bar.value) > 100'
+    ]
+    assert formatted == '\n'.join(expected)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/test_grouping.py new/sqlparse-0.1.15/tests/test_grouping.py
--- old/sqlparse-0.1.10/tests/test_grouping.py  2013-10-24 08:37:28.000000000 +0200
+++ new/sqlparse-0.1.15/tests/test_grouping.py  2015-04-12 15:14:54.000000000 +0200
@@ -1,5 +1,7 @@
 # -*- coding: utf-8 -*-
 
+import pytest
+
 import sqlparse
 from sqlparse import sql
 from sqlparse import tokens as T
@@ -13,11 +15,12 @@
         s = 'select (select (x3) x2) and (y2) bar'
         parsed = sqlparse.parse(s)[0]
         self.ndiffAssertEqual(s, str(parsed))
-        self.assertEqual(len(parsed.tokens), 9)
+        self.assertEqual(len(parsed.tokens), 7)
         self.assert_(isinstance(parsed.tokens[2], sql.Parenthesis))
-        self.assert_(isinstance(parsed.tokens[-3], sql.Parenthesis))
-        self.assertEqual(len(parsed.tokens[2].tokens), 7)
-        self.assert_(isinstance(parsed.tokens[2].tokens[3], sql.Parenthesis))
+        self.assert_(isinstance(parsed.tokens[-1], sql.Identifier))
+        self.assertEqual(len(parsed.tokens[2].tokens), 5)
+        self.assert_(isinstance(parsed.tokens[2].tokens[3], sql.Identifier))
+        self.assert_(isinstance(parsed.tokens[2].tokens[3].tokens[0], sql.Parenthesis))
         self.assertEqual(len(parsed.tokens[2].tokens[3].tokens), 3)
 
     def test_comments(self):
@@ -129,6 +132,12 @@
         l = p.tokens[2]
         self.assertEqual(len(l.tokens), 13)
 
+    def test_identifier_list_with_inline_comments(self):  # issue163
+        p = sqlparse.parse('foo /* a comment */, bar')[0]
+        self.assert_(isinstance(p.tokens[0], sql.IdentifierList))
+        self.assert_(isinstance(p.tokens[0].tokens[0], sql.Identifier))
+        self.assert_(isinstance(p.tokens[0].tokens[3], sql.Identifier))
+
     def test_where(self):
         s = 'select * from foo where bar = 1 order by id desc'
         p = sqlparse.parse(s)[0]
@@ -137,7 +146,7 @@
         s = 'select x from (select y from foo where bar = 1) z'
         p = sqlparse.parse(s)[0]
         self.ndiffAssertEqual(s, unicode(p))
-        self.assertTrue(isinstance(p.tokens[-3].tokens[-2], sql.Where))
+        self.assertTrue(isinstance(p.tokens[-1].tokens[0].tokens[-2], sql.Where))
 
     def test_typecast(self):
         s = 'select foo::integer from bar'
@@ -198,6 +207,12 @@
         self.assert_(isinstance(p.tokens[0], sql.Function))
         self.assertEqual(len(list(p.tokens[0].get_parameters())), 2)
 
+    def test_function_not_in(self):  # issue183
+        p = sqlparse.parse('in(1, 2)')[0]
+        self.assertEqual(len(p.tokens), 2)
+        self.assertEqual(p.tokens[0].ttype, T.Keyword)
+        self.assert_(isinstance(p.tokens[1], sql.Parenthesis))
+
     def test_varchar(self):
         p = sqlparse.parse('"text" Varchar(50) NOT NULL')[0]
         self.assert_(isinstance(p.tokens[2], sql.Function))
@@ -236,6 +251,12 @@
     assert p.tokens[1].ttype is T.Whitespace
 
 
+def test_identifier_with_string_literals():
+    p = sqlparse.parse('foo + \'bar\'')[0]
+    assert len(p.tokens) == 1
+    assert isinstance(p.tokens[0], sql.Identifier)
+
+
 # This test seems to be wrong. It was introduced when fixing #53, but #111
 # showed that this shouldn't be an identifier at all. I'm leaving this
 # commented in the source for a while.
@@ -270,6 +291,15 @@
     assert isinstance(p.tokens[0], sql.Comparison)
 
 
+def test_comparison_with_floats():  # issue145
+    p = sqlparse.parse('foo = 25.5')[0]
+    assert len(p.tokens) == 1
+    assert isinstance(p.tokens[0], sql.Comparison)
+    assert len(p.tokens[0].tokens) == 5
+    assert p.tokens[0].left.value == 'foo'
+    assert p.tokens[0].right.value == '25.5'
+
+
 def test_comparison_with_parenthesis():  # issue23
     p = sqlparse.parse('(3 + 4) = 7')[0]
     assert len(p.tokens) == 1
@@ -277,3 +307,88 @@
     comp = p.tokens[0]
     assert isinstance(comp.left, sql.Parenthesis)
     assert comp.right.ttype is T.Number.Integer
+
+
+def test_comparison_with_strings():  # issue148
+    p = sqlparse.parse('foo = \'bar\'')[0]
+    assert len(p.tokens) == 1
+    assert isinstance(p.tokens[0], sql.Comparison)
+    assert p.tokens[0].right.value == '\'bar\''
+    assert p.tokens[0].right.ttype == T.String.Single
+
+
+@pytest.mark.parametrize('start', ['FOR', 'FOREACH'])
+def test_forloops(start):
+    p = sqlparse.parse('%s foo in bar LOOP foobar END LOOP' % start)[0]
+    assert (len(p.tokens)) == 1
+    assert isinstance(p.tokens[0], sql.For)
+
+
+def test_nested_for():
+    p = sqlparse.parse('FOR foo LOOP FOR bar LOOP END LOOP END LOOP')[0]
+    assert len(p.tokens) == 1
+    for1 = p.tokens[0]
+    assert for1.tokens[0].value == 'FOR'
+    assert for1.tokens[-1].value == 'END LOOP'
+    for2 = for1.tokens[6]
+    assert isinstance(for2, sql.For)
+    assert for2.tokens[0].value == 'FOR'
+    assert for2.tokens[-1].value == 'END LOOP'
+
+
+def test_begin():
+    p = sqlparse.parse('BEGIN foo END')[0]
+    assert len(p.tokens) == 1
+    assert isinstance(p.tokens[0], sql.Begin)
+
+
+def test_nested_begin():
+    p = sqlparse.parse('BEGIN foo BEGIN bar END END')[0]
+    assert len(p.tokens) == 1
+    outer = p.tokens[0]
+    assert outer.tokens[0].value == 'BEGIN'
+    assert outer.tokens[-1].value == 'END'
+    inner = outer.tokens[4]
+    assert inner.tokens[0].value == 'BEGIN'
+    assert inner.tokens[-1].value == 'END'
+    assert isinstance(inner, sql.Begin)
+
+
+def test_aliased_column_without_as():
+    p = sqlparse.parse('foo bar')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_real_name() == 'foo'
+    assert p[0].get_alias() == 'bar'
+
+    p = sqlparse.parse('foo.bar baz')[0].tokens[0]
+    assert p.get_parent_name() == 'foo'
+    assert p.get_real_name() == 'bar'
+    assert p.get_alias() == 'baz'
+
+
+def test_qualified_function():
+    p = sqlparse.parse('foo()')[0].tokens[0]
+    assert p.get_parent_name() is None
+    assert p.get_real_name() == 'foo'
+
+    p = sqlparse.parse('foo.bar()')[0].tokens[0]
+    assert p.get_parent_name() == 'foo'
+    assert p.get_real_name() == 'bar'
+
+
+def test_aliased_function_without_as():
+    p = sqlparse.parse('foo() bar')[0].tokens[0]
+    assert p.get_parent_name() is None
+    assert p.get_real_name() == 'foo'
+    assert p.get_alias() == 'bar'
+
+    p = sqlparse.parse('foo.bar() baz')[0].tokens[0]
+    assert p.get_parent_name() == 'foo'
+    assert p.get_real_name() == 'bar'
+    assert p.get_alias() == 'baz'
+
+
+def test_aliased_literal_without_as():
+    p = sqlparse.parse('1 foo')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_alias() == 'foo'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/test_parse.py new/sqlparse-0.1.15/tests/test_parse.py
--- old/sqlparse-0.1.10/tests/test_parse.py     2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/tests/test_parse.py     2015-04-12 15:14:54.000000000 +0200
@@ -99,6 +99,10 @@
         self.assert_(t[-1].ttype is sqlparse.tokens.Name.Placeholder)
         self.assertEqual(t[-1].value, '$a')
 
+    def test_modulo_not_placeholder(self):
+        tokens = list(sqlparse.lexer.tokenize('x %3'))
+        self.assertEqual(tokens[2][0], sqlparse.tokens.Operator)
+
     def test_access_symbol(self):  # see issue27
         t = sqlparse.parse('select a.[foo bar] as foo')[0].tokens
         self.assert_(isinstance(t[-1], sqlparse.sql.Identifier))
@@ -106,6 +110,13 @@
         self.assertEqual(t[-1].get_real_name(), '[foo bar]')
         self.assertEqual(t[-1].get_parent_name(), 'a')
 
+    def test_square_brackets_notation_isnt_too_greedy(self):  # see issue153
+        t = sqlparse.parse('[foo], [bar]')[0].tokens
+        self.assert_(isinstance(t[0], sqlparse.sql.IdentifierList))
+        self.assertEqual(len(t[0].tokens), 4)
+        self.assertEqual(t[0].tokens[0].get_real_name(), '[foo]')
+        self.assertEqual(t[0].tokens[-1].get_real_name(), '[bar]')
+
     def test_keyword_like_identifier(self):  # see issue47
         t = sqlparse.parse('foo.key')[0].tokens
         self.assertEqual(len(t), 1)
@@ -116,6 +127,16 @@
         self.assertEqual(len(t), 1)
         self.assert_(isinstance(t[0], sqlparse.sql.Identifier))
 
+    def test_function_param_single_literal(self):
+        t = sqlparse.parse('foo(5)')[0].tokens[0].get_parameters()
+        self.assertEqual(len(t), 1)
+        self.assert_(t[0].ttype is T.Number.Integer)
+
+    def test_nested_function(self):
+        t = sqlparse.parse('foo(bar(5))')[0].tokens[0].get_parameters()
+        self.assertEqual(len(t), 1)
+        self.assert_(type(t[0]) is sqlparse.sql.Function)
+
 
 def test_quoted_identifier():
     t = sqlparse.parse('select x.y as "z" from foo')[0].tokens
@@ -124,6 +145,15 @@
     assert t[2].get_real_name() == 'y'
 
 
+@pytest.mark.parametrize('name', [
+    'foo',
+    '_foo',
+])
+def test_valid_identifier_names(name):  # issue175
+    t = sqlparse.parse(name)[0].tokens
+    assert isinstance(t[0], sqlparse.sql.Identifier)
+
+
 def test_psql_quotation_marks():  # issue83
     # regression: make sure plain $$ work
     t = sqlparse.split("""
@@ -145,6 +175,14 @@
     assert len(t) == 2
 
 
+def test_double_precision_is_builtin():
+    sql = 'DOUBLE PRECISION'
+    t = sqlparse.parse(sql)[0].tokens
+    assert (len(t) == 1
+            and t[0].ttype == sqlparse.tokens.Name.Builtin
+            and t[0].value == 'DOUBLE PRECISION')
+
+
 @pytest.mark.parametrize('ph', ['?', ':1', ':foo', '%s', '%(foo)s'])
 def test_placeholder(ph):
     p = sqlparse.parse(ph)[0].tokens
@@ -169,3 +207,89 @@
     p = sqlparse.parse('"foo"')[0].tokens
     assert len(p) == 1
     assert isinstance(p[0], sqlparse.sql.Identifier)
+
+
+def test_single_quotes_with_linebreaks():  # issue118
+    p = sqlparse.parse("'f\nf'")[0].tokens
+    assert len(p) == 1
+    assert p[0].ttype is T.String.Single
+
+
+def test_sqlite_identifiers():
+    # Make sure we still parse sqlite style escapes
+    p = sqlparse.parse('[col1],[col2]')[0].tokens
+    assert (len(p) == 1
+            and isinstance(p[0], sqlparse.sql.IdentifierList)
+            and [id.get_name() for id in p[0].get_identifiers()]
+                    == ['[col1]', '[col2]'])
+
+    p = sqlparse.parse('[col1]+[col2]')[0]
+    types = [tok.ttype for tok in p.flatten()]
+    assert types == [T.Name, T.Operator, T.Name]
+
+
+def test_simple_1d_array_index():
+    p = sqlparse.parse('col[1]')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_name() == 'col'
+    indices = list(p[0].get_array_indices())
+    assert (len(indices) == 1  # 1-dimensional index
+            and len(indices[0]) == 1  # index is single token
+            and indices[0][0].value == '1')
+
+
+def test_2d_array_index():
+    p = sqlparse.parse('col[x][(y+1)*2]')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_name() == 'col'
+    assert len(list(p[0].get_array_indices())) == 2  # 2-dimensional index
+
+
+def test_array_index_function_result():
+    p = sqlparse.parse('somefunc()[1]')[0].tokens
+    assert len(p) == 1
+    assert len(list(p[0].get_array_indices())) == 1
+
+
+def test_schema_qualified_array_index():
+    p = sqlparse.parse('schem.col[1]')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_parent_name() == 'schem'
+    assert p[0].get_name() == 'col'
+    assert list(p[0].get_array_indices())[0][0].value == '1'
+
+
+def test_aliased_array_index():
+    p = sqlparse.parse('col[1] x')[0].tokens
+    assert len(p) == 1
+    assert p[0].get_alias() == 'x'
+    assert p[0].get_real_name() == 'col'
+    assert list(p[0].get_array_indices())[0][0].value == '1'
+
+
+def test_array_literal():
+    # See issue #176
+    p = sqlparse.parse('ARRAY[%s, %s]')[0]
+    assert len(p.tokens) == 2
+    assert len(list(p.flatten())) == 7
+
+
+def test_typed_array_definition():
+    # array indices aren't grouped with builtins, but make sure we can extract
+    # identifier names
+    p = sqlparse.parse('x int, y int[], z int')[0]
+    names = [x.get_name() for x in p.get_sublists()
+             if isinstance(x, sqlparse.sql.Identifier)]
+    assert names == ['x', 'y', 'z']
+
+
+@pytest.mark.parametrize('sql', [
+    'select 1 -- foo',
+    'select 1 # foo'  # see issue178
+])
+def test_single_line_comments(sql):
+    p = sqlparse.parse(sql)[0]
+    assert len(p.tokens) == 5
+    assert p.tokens[-1].ttype == T.Comment.Single
+
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/test_regressions.py new/sqlparse-0.1.15/tests/test_regressions.py
--- old/sqlparse-0.1.10/tests/test_regressions.py       2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/tests/test_regressions.py       2015-04-12 15:14:54.000000000 +0200
@@ -218,3 +218,29 @@
                             '    "price" = 1,',
                             '    "description" = NULL'])
     assert formatted == tformatted
+
+
+def test_except_formatting():
+    sql = 'SELECT 1 FROM foo WHERE 2 = 3 EXCEPT SELECT 2 FROM bar WHERE 1 = 2'
+    formatted = sqlparse.format(sql, reindent=True)
+    tformatted = '\n'.join([
+        'SELECT 1',
+        'FROM foo',
+        'WHERE 2 = 3',
+        'EXCEPT',
+        'SELECT 2',
+        'FROM bar',
+        'WHERE 1 = 2'
+    ])
+    assert formatted == tformatted
+
+
+def test_null_with_as():
+    sql = 'SELECT NULL AS c1, NULL AS c2 FROM t1'
+    formatted = sqlparse.format(sql, reindent=True)
+    tformatted = '\n'.join([
+        'SELECT NULL AS c1,',
+        '       NULL AS c2',
+        'FROM t1'
+    ])
+    assert formatted == tformatted
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/test_split.py new/sqlparse-0.1.15/tests/test_split.py
--- old/sqlparse-0.1.10/tests/test_split.py     2013-10-23 05:46:22.000000000 +0200
+++ new/sqlparse-0.1.15/tests/test_split.py     2015-04-12 15:14:54.000000000 +0200
@@ -22,6 +22,10 @@
         self.ndiffAssertEqual(unicode(stmts[0]), self._sql1)
         self.ndiffAssertEqual(unicode(stmts[1]), sql2)
 
+    def test_split_backslash(self):
+        stmts = sqlparse.parse(r"select '\\'; select '\''; select '\\\'';")
+        self.assertEqual(len(stmts), 3)
+
     def test_create_function(self):
         sql = load_file('function.sql')
         stmts = sqlparse.parse(sql)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tests/utils.py new/sqlparse-0.1.15/tests/utils.py
--- old/sqlparse-0.1.10/tests/utils.py  2013-10-14 06:34:00.000000000 +0200
+++ new/sqlparse-0.1.15/tests/utils.py  2015-04-12 15:14:54.000000000 +0200
@@ -8,6 +8,8 @@
 import unittest
 from StringIO import StringIO
 
+import sqlparse.utils
+
 NL = '\n'
 DIR_PATH = os.path.abspath(os.path.dirname(__file__))
 PARENT_DIR = os.path.dirname(DIR_PATH)
@@ -31,7 +33,12 @@
         if first != second:
             sfirst = unicode(first)
             ssecond = unicode(second)
-            diff = difflib.ndiff(sfirst.splitlines(), ssecond.splitlines())
+            # Using the built-in .splitlines() method here will cause incorrect
+            # results when splitting statements that have quoted CR/CR+LF
+            # characters.
+            sfirst = sqlparse.utils.split_unquoted_newlines(sfirst)
+            ssecond = sqlparse.utils.split_unquoted_newlines(ssecond)
+            diff = difflib.ndiff(sfirst, ssecond)
             fp = StringIO()
             fp.write(NL)
             fp.write(NL.join(diff))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sqlparse-0.1.10/tox.ini new/sqlparse-0.1.15/tox.ini
--- old/sqlparse-0.1.10/tox.ini 2013-10-24 06:13:19.000000000 +0200
+++ new/sqlparse-0.1.15/tox.ini 2015-04-12 15:14:54.000000000 +0200
@@ -1,5 +1,5 @@
 [tox]
-envlist=py25,py26,py27,py32,py33,pypy
+envlist=py26,py27,py32,py33,py34,pypy
 
 [testenv]
 deps=
@@ -9,10 +9,6 @@
   sqlformat --version  # Sanity check.
   py.test --cov=sqlparse/ tests
 
-[testenv:py25]
-setenv=
-  PIP_INSECURE=1
-
 [testenv:py32]
 changedir={envdir}
 commands=
@@ -30,3 +26,12 @@
   cp -r {toxinidir}/tests/ tests/
   2to3 -w --no-diffs -n tests/
   py.test --cov={envdir}/lib/python3.3/site-packages/sqlparse/ tests
+
+[testenv:py34]
+changedir={envdir}
+commands=
+  sqlformat --version  # Sanity check.
+  rm -rf tests/
+  cp -r {toxinidir}/tests/ tests/
+  2to3 -w --no-diffs -n tests/
+  py.test --cov={envdir}/lib/python3.4/site-packages/sqlparse/ tests

