Hi,
I have been working all day to make this patch apply cleanly.

On Tue, Jun 18, 2019 at 3:05 PM Dave Page <dp...@pgadmin.org> wrote:

> Hi
>
> [please keep the mailing list CC'd]
>
> On Mon, Jun 17, 2019 at 3:05 PM Yosry Muhammad <yosry...@gmail.com> wrote:
>
>>
>>> Do you want me to ask our design guy for an icon?
>>>
>>
>> That would be great to keep things clear and separated for the users.
>>
>
> I've asked Chethana (CC'd) to create one.
>

Waiting for the icon, will set it up once it is ready.

>
>
>> Please find attached a patch to fix the problem you ran into.
>> The problem is that I edited the primary_keys.sql file in
>> web/tools/sqleditor/templates/sqleditor/sql/default/ only, while there was
>> another one in ..../templates/sqleditor/sql/11_plus/. What happens
>> with versions before 11? Are the scripts in the default/ folder used if
>> they are not found in that version's folder?
>>
>> The patch also removes a few unnecessary lines of code that I found, not
>> related to the problem.
>>
>
> Ahh, yes - that works :-). I haven't done a detailed code review yet as
> you're going to be whacking things around for a bit, but I didn't see any
> obvious styling issues except for:
>
> (pgadmin4) dpage@hal:~/git/pgadmin4$ make check-pep8
> pycodestyle --config=.pycodestyle docs/
> pycodestyle --config=.pycodestyle pkg/
> pycodestyle --config=.pycodestyle web/
> web/pgadmin/tools/sqleditor/__init__.py:440: [E125] continuation line with
> same indent as next logical line
> web/pgadmin/tools/sqleditor/command.py:929: [E501] line too long (80 > 79
> characters)
> web/pgadmin/tools/sqleditor/command.py:977: [W391] blank line at end of
> file
> web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py:53:
> [E501] line too long (92 > 79 characters)
> web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py:74:
> [E501] line too long (80 > 79 characters)
> web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py:81:
> [E501] line too long (97 > 79 characters)
> web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py:83:
> [E501] line too long (84 > 79 characters)
> 1       E125 continuation line with same indent as next logical line
> 5       E501 line too long (80 > 79 characters)
> 1       W391 blank line at end of file
> 7
> make: *** [check-pep8] Error 1
>
> All patches need to pass that (and all other) existing tests before they
> can be committed. Aside from that:
>
>
I ran the PEP 8 checks and the JS tests on this patch; however, I could not
run the Python tests due to a problem with chromedriver (I am working on
it). Please let me know if any tests fail.

- When revising patches, please send an updated one for the whole thing,
> rather than incremental ones. Incrementals are more work to apply and don't
> give us any benefit in return.
>
>
The attached patch is a single patch including all old and new increments.

- We need to add a "do you want to continue" warning before actions like
> Execute or EXPLAIN are run, if there are unsaved changes in the grid.
>
> - I think we should make the text in any cells that has been edited bold
> until saved, so the user can see where changes have been made (as they can
> with deleted rows).
>

Both done, new rows are highlighted too.

>
> - If I make two data edits and then delete a row, I get 3 entries in the
> History panel, all showing the same delete. I would actually argue that
> data edit queries that pgAdmin generates should not go into the History at
> all, but maybe they should be added albeit with a flag to say they're
> internal queries and an option to hide them. Thoughts?
>

That was a bug in the existing 'save changes' action of 'View Data', on
which mine is based. I fixed the bug, and now the queries are shown
correctly. However, the queries are shown in the form in which they are
sent from the backend to the database driver (without parameters; this is
also a pre-existing bug in View Data mode), for example:

INSERT INTO public.kweek (
> media_url, username, text, created_at) VALUES (
> %(media_url)s::character varying, %(username)s::character varying,
> %(text)s::text, %(created_at)s::timestamp without time zone)
>  returning id;
>

I propose two solutions:
1- Hide pgAdmin's generated SQL from the History panel (in both modes).
2- Show the actual SQL query that was executed, after the parameters are
plugged in (more understandable and potentially helpful).
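For the second option, psycopg2 already exposes the query after parameter
binding via `cursor.mogrify()`. As a rough, self-contained illustration of
the idea (the helper below does naive quoting for display only and must
never be used to build queries for execution), plugging the parameters into
the driver's `%(name)s` placeholders could look like this:

```python
def quote_literal(value):
    """Naive literal quoting, for display in the History panel only
    (never use this to build queries for execution)."""
    if value is None:
        return 'NULL'
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"


def render_query_for_history(query, params):
    """Substitute the driver's %(name)s placeholders with quoted
    literal values, yielding the query as it was effectively run."""
    return query % {k: quote_literal(v) for k, v in params.items()}


query = ("INSERT INTO public.kweek (username, text) VALUES ("
         "%(username)s::character varying, %(text)s::text) returning id;")
params = {'username': 'yosry', 'text': 'Hello'}
print(render_query_for_history(query, params))
```

In pgAdmin itself, asking psycopg2 to do this (via `cursor.mogrify()`)
would be safer than any hand-rolled interpolation.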


> - We need to think about how data editing fits in with transaction
> control. Right now, it seems to happen entirely outside of it - for
> example, I tend to work with auto commit turned off, so my connection sits
> idle-in-transaction following an initial select, and remains un-affected by
> edits. Please think about this and suggest options for us to discuss.
>

I integrated data editing into transaction control as you noted. Now
the behavior is as follows:
1- In View Data mode: the same existing behavior.
2- In Query Tool mode:
- If auto-commit is on: the modifications are made and committed once Save
is pressed.
- If auto-commit is off: the modifications are made as part of the ongoing
transaction (or a new one if none is ongoing); they are not committed
unless the user executes a commit (or rollback) command.
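The Query Tool behavior above can be sketched as follows (a minimal
illustration; the class and function here are stand-ins, not pgAdmin's
actual API):

```python
class FakeConnection:
    """Stand-in connection that records executed statements."""
    def __init__(self):
        self.log = []

    def execute(self, sql):
        self.log.append(sql)


def save_changed_data(conn, statements, auto_commit):
    """Apply the generated edit statements; commit immediately only
    when auto-commit is on, otherwise leave the edits pending as part
    of the ongoing transaction."""
    for sql in statements:
        conn.execute(sql)
    if auto_commit:
        conn.execute('COMMIT;')


conn = FakeConnection()
save_changed_data(conn, ["UPDATE t SET a = 1 WHERE id = 2;"],
                  auto_commit=False)
# With auto-commit off, no COMMIT is issued; the user must commit
# (or roll back) explicitly.
```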


>
>> - What documentations or unit tests should I write? any guidelines here
>>>>> would be appreciated.
>>>>>
>>>>
>>> We're aiming to add unit tests to give as much coverage as possible,
>>> focussing on Jasmine, Python/API and then feature tests in that order (fast
>>> -> slow execution, which is important). So we probably want
>>>
>>> - one feature test to do basic end-to-end validation
>>> - Python/API tests to exercise is_query_resultset_updatable,
>>> save_changed_data and anything else that seems relevant.
>>> - Jasmine tests to ensure buttons are enabled/disabled as they should
>>> be, and that primary key and updatability data is tracked properly (this
>>> may not be feasible, but I'd still like it to be investigated and justified
>>> if not).
>>>
>>> We're also a day or two away from committing a new test suite for
>>> exercising CRUD operations and the resulting reverse engineered SQL; if we
>>> can utilise that to test primary_keys.sql, that'd be good.
>>>
>>>
>> I am sorry but I don't understand what should be done exactly in those
>> tests. Could you tell me where I can look at examples for feature tests,
>> Python/API tests and Jasmine tests (preferably for features related to the
>> query tool)?
>>
>
> They're all over the codebase to be honest. Some examples though:
>
> Varions Jasmine tests: web/regression/javascript (e.g. history, slickgrid,
> sqleditor)
> Various API tests: web/pgadmin/tools/sqleditor/tests
> Feature tests: web/pgadmin/feature_tests (e.g. query_tool_*)
>
>
>>
>>
>>> Once the in-place editing works, we'll need to rip out all the code
>>> related to the View/Edit data mode of the query tool. For example, there
>>> will be no need to have the Filter/Sort options any more as the user can
>>> edit the SQL directly (that one may be controversial - it's probably worth
>>> polling the users first). Of course, if they don't want it to be removed,
>>> we'll need to re-think how it works as then we'd have a dialogue that tries
>>> to edit arbitrary SQL strings.
>>>
>>
>> I think it makes more sense for filters to be disabled. I mean since the
>> user is already writing SQL it would be more convenient to just edit it
>> directly.
>>
>
> Well we're not going to just disable them - we'll either remove them, or
> try to make them work. I'm leaning strongly towards just removing that code
> entirely.
>
>
I meant disabling them in the Query Tool while keeping them in View Data
mode, since the user cannot edit the SQL in View Data mode. Do you want to
remove the feature from both modes completely?


> Good work - thanks!
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Looking forward to your feedback. Thanks!

-- 
*Yosry Muhammad Yosry*

Computer Engineering student,
The Faculty of Engineering,
Cairo University (2021).
Class representative of CMP 2021.
https://www.linkedin.com/in/yosrym93/
diff --git a/web/pgadmin/tools/sqleditor/__init__.py b/web/pgadmin/tools/sqleditor/__init__.py
index 16f7133f..a162eeb8 100644
--- a/web/pgadmin/tools/sqleditor/__init__.py
+++ b/web/pgadmin/tools/sqleditor/__init__.py
@@ -384,6 +384,8 @@ def poll(trans_id):
     rset = None
     has_oids = False
     oids = None
+    additional_messages = None
+    notifies = None
 
     # Check the transaction and connection status
     status, error_msg, conn, trans_obj, session_obj = \
@@ -422,6 +424,22 @@ def poll(trans_id):
 
             st, result = conn.async_fetchmany_2darray(ON_DEMAND_RECORD_COUNT)
 
+            # There may be additional messages even if result is present
+            # eg: Function can provide result as well as RAISE messages
+            messages = conn.messages()
+            if messages:
+                additional_messages = ''.join(messages)
+            notifies = conn.get_notifies()
+
+            # Procedure/Function output may come in the form of Notices
+            # from the database server, so we need to append those outputs
+            # with the original result.
+            if result is None:
+                result = conn.status_message()
+                if (result != 'SELECT 1' and result != 'SELECT 0') and \
+                   result is not None and additional_messages:
+                    result = additional_messages + result
+
             if st:
                 if 'primary_keys' in session_obj:
                     primary_keys = session_obj['primary_keys']
@@ -438,10 +456,22 @@ def poll(trans_id):
                 )
                 session_obj['client_primary_key'] = client_primary_key
 
-                if columns_info is not None:
+                # If trans_obj is a QueryToolCommand then check for updatable
+                # resultsets and primary keys
+                if isinstance(trans_obj, QueryToolCommand):
+                    trans_obj.check_for_updatable_resultset_and_primary_keys()
+                    pk_names, primary_keys = trans_obj.get_primary_keys()
+                    # If primary_keys exist, add them to the session_obj to
+                    # allow for saving any changes to the data
+                    if primary_keys is not None:
+                        session_obj['primary_keys'] = primary_keys
 
-                    command_obj = pickle.loads(session_obj['command_obj'])
-                    if hasattr(command_obj, 'obj_id'):
+                if columns_info is not None:
+                    # If it is a QueryToolCommand that has obj_id attribute
+                    # then it should also be editable
+                    if hasattr(trans_obj, 'obj_id') and \
+                        (not isinstance(trans_obj, QueryToolCommand) or
+                         trans_obj.can_edit()):
                         # Get the template path for the column
                         template_path = 'columns/sql/#{0}#'.format(
                             conn.manager.version
@@ -449,7 +479,7 @@ def poll(trans_id):
 
                         SQL = render_template(
                             "/".join([template_path, 'nodes.sql']),
-                            tid=command_obj.obj_id,
+                            tid=trans_obj.obj_id,
                             has_oids=True
                         )
                         # rows with attribute not_null
@@ -520,26 +550,8 @@ def poll(trans_id):
         status = 'NotConnected'
         result = error_msg
 
-    # There may be additional messages even if result is present
-    # eg: Function can provide result as well as RAISE messages
-    additional_messages = None
-    notifies = None
-    if status == 'Success':
-        messages = conn.messages()
-        if messages:
-            additional_messages = ''.join(messages)
-        notifies = conn.get_notifies()
-
-    # Procedure/Function output may comes in the form of Notices from the
-    # database server, so we need to append those outputs with the
-    # original result.
-    if status == 'Success' and result is None:
-        result = conn.status_message()
-        if (result != 'SELECT 1' or result != 'SELECT 0') and \
-           result is not None and additional_messages:
-            result = additional_messages + result
-
     transaction_status = conn.transaction_status()
+
     return make_json_response(
         data={
             'status': status, 'result': result,
@@ -741,21 +753,24 @@ def save(trans_id):
 
         manager = get_driver(
             PG_DEFAULT_DRIVER).connection_manager(trans_obj.sid)
-        default_conn = manager.connection(did=trans_obj.did)
+        if hasattr(trans_obj, 'conn_id'):
+            conn = manager.connection(did=trans_obj.did,
+                                      conn_id=trans_obj.conn_id)
+        else:
+            conn = manager.connection(did=trans_obj.did)  # default connection
 
         # Connect to the Server if not connected.
-        if not default_conn.connected():
-            status, msg = default_conn.connect()
+        if not conn.connected():
+            status, msg = conn.connect()
             if not status:
                 return make_json_response(
                     data={'status': status, 'result': u"{}".format(msg)}
                 )
-
         status, res, query_res, _rowid = trans_obj.save(
             changed_data,
             session_obj['columns_info'],
             session_obj['client_primary_key'],
-            default_conn)
+            conn)
     else:
         status = False
         res = error_msg
diff --git a/web/pgadmin/tools/sqleditor/command.py b/web/pgadmin/tools/sqleditor/command.py
index d4b0700f..823e9d47 100644
--- a/web/pgadmin/tools/sqleditor/command.py
+++ b/web/pgadmin/tools/sqleditor/command.py
@@ -19,6 +19,10 @@ from flask import render_template
 from flask_babelex import gettext
 from pgadmin.utils.ajax import forbidden
 from pgadmin.utils.driver import get_driver
+from pgadmin.tools.sqleditor.utils.constant_definition import ASYNC_OK
+from pgadmin.tools.sqleditor.utils.is_query_resultset_updatable \
+    import is_query_resultset_updatable
+from pgadmin.tools.sqleditor.utils.save_changed_data import save_changed_data
 
 from config import PG_DEFAULT_DRIVER
 
@@ -668,244 +672,11 @@ class TableCommand(GridCommand):
         else:
             conn = default_conn
 
-        status = False
-        res = None
-        query_res = dict()
-        count = 0
-        list_of_rowid = []
-        operations = ('added', 'updated', 'deleted')
-        list_of_sql = {}
-        _rowid = None
-
-        if conn.connected():
-
-            # Start the transaction
-            conn.execute_void('BEGIN;')
-
-            # Iterate total number of records to be updated/inserted
-            for of_type in changed_data:
-                # No need to go further if its not add/update/delete operation
-                if of_type not in operations:
-                    continue
-                # if no data to be save then continue
-                if len(changed_data[of_type]) < 1:
-                    continue
-
-                column_type = {}
-                column_data = {}
-                for each_col in columns_info:
-                    if (
-                        columns_info[each_col]['not_null'] and
-                        not columns_info[each_col]['has_default_val']
-                    ):
-                        column_data[each_col] = None
-                        column_type[each_col] =\
-                            columns_info[each_col]['type_name']
-                    else:
-                        column_type[each_col] = \
-                            columns_info[each_col]['type_name']
-
-                # For newly added rows
-                if of_type == 'added':
-                    # Python dict does not honour the inserted item order
-                    # So to insert data in the order, we need to make ordered
-                    # list of added index We don't need this mechanism in
-                    # updated/deleted rows as it does not matter in
-                    # those operations
-                    added_index = OrderedDict(
-                        sorted(
-                            changed_data['added_index'].items(),
-                            key=lambda x: int(x[0])
-                        )
-                    )
-                    list_of_sql[of_type] = []
-
-                    # When new rows are added, only changed columns data is
-                    # sent from client side. But if column is not_null and has
-                    # no_default_value, set column to blank, instead
-                    # of not null which is set by default.
-                    column_data = {}
-                    pk_names, primary_keys = self.get_primary_keys()
-                    has_oids = 'oid' in column_type
-
-                    for each_row in added_index:
-                        # Get the row index to match with the added rows
-                        # dict key
-                        tmp_row_index = added_index[each_row]
-                        data = changed_data[of_type][tmp_row_index]['data']
-                        # Remove our unique tracking key
-                        data.pop(client_primary_key, None)
-                        data.pop('is_row_copied', None)
-                        list_of_rowid.append(data.get(client_primary_key))
-
-                        # Update columns value with columns having
-                        # not_null=False and has no default value
-                        column_data.update(data)
-
-                        sql = render_template(
-                            "/".join([self.sql_path, 'insert.sql']),
-                            data_to_be_saved=column_data,
-                            primary_keys=None,
-                            object_name=self.object_name,
-                            nsp_name=self.nsp_name,
-                            data_type=column_type,
-                            pk_names=pk_names,
-                            has_oids=has_oids
-                        )
-
-                        select_sql = render_template(
-                            "/".join([self.sql_path, 'select.sql']),
-                            object_name=self.object_name,
-                            nsp_name=self.nsp_name,
-                            primary_keys=primary_keys,
-                            has_oids=has_oids
-                        )
-
-                        list_of_sql[of_type].append({
-                            'sql': sql, 'data': data,
-                            'client_row': tmp_row_index,
-                            'select_sql': select_sql
-                        })
-                        # Reset column data
-                        column_data = {}
-
-                # For updated rows
-                elif of_type == 'updated':
-                    list_of_sql[of_type] = []
-                    for each_row in changed_data[of_type]:
-                        data = changed_data[of_type][each_row]['data']
-                        pk = changed_data[of_type][each_row]['primary_keys']
-                        sql = render_template(
-                            "/".join([self.sql_path, 'update.sql']),
-                            data_to_be_saved=data,
-                            primary_keys=pk,
-                            object_name=self.object_name,
-                            nsp_name=self.nsp_name,
-                            data_type=column_type
-                        )
-                        list_of_sql[of_type].append({'sql': sql, 'data': data})
-                        list_of_rowid.append(data.get(client_primary_key))
-
-                # For deleted rows
-                elif of_type == 'deleted':
-                    list_of_sql[of_type] = []
-                    is_first = True
-                    rows_to_delete = []
-                    keys = None
-                    no_of_keys = None
-                    for each_row in changed_data[of_type]:
-                        rows_to_delete.append(changed_data[of_type][each_row])
-                        # Fetch the keys for SQL generation
-                        if is_first:
-                            # We need to covert dict_keys to normal list in
-                            # Python3
-                            # In Python2, it's already a list & We will also
-                            # fetch column names using index
-                            keys = list(
-                                changed_data[of_type][each_row].keys()
-                            )
-                            no_of_keys = len(keys)
-                            is_first = False
-                    # Map index with column name for each row
-                    for row in rows_to_delete:
-                        for k, v in row.items():
-                            # Set primary key with label & delete index based
-                            # mapped key
-                            try:
-                                row[changed_data['columns']
-                                    [int(k)]['name']] = v
-                            except ValueError:
-                                continue
-                            del row[k]
-
-                    sql = render_template(
-                        "/".join([self.sql_path, 'delete.sql']),
-                        data=rows_to_delete,
-                        primary_key_labels=keys,
-                        no_of_keys=no_of_keys,
-                        object_name=self.object_name,
-                        nsp_name=self.nsp_name
-                    )
-                    list_of_sql[of_type].append({'sql': sql, 'data': {}})
-
-            for opr, sqls in list_of_sql.items():
-                for item in sqls:
-                    if item['sql']:
-                        row_added = None
-
-                        # Fetch oids/primary keys
-                        if 'select_sql' in item and item['select_sql']:
-                            status, res = conn.execute_dict(
-                                item['sql'], item['data'])
-                        else:
-                            status, res = conn.execute_void(
-                                item['sql'], item['data'])
-
-                        if not status:
-                            conn.execute_void('ROLLBACK;')
-                            # If we roll backed every thing then update the
-                            # message for each sql query.
-                            for val in query_res:
-                                if query_res[val]['status']:
-                                    query_res[val]['result'] = \
-                                        'Transaction ROLLBACK'
-
-                            # If list is empty set rowid to 1
-                            try:
-                                if list_of_rowid:
-                                    _rowid = list_of_rowid[count]
-                                else:
-                                    _rowid = 1
-                            except Exception:
-                                _rowid = 0
-
-                            return status, res, query_res, _rowid
-
-                        # Select added row from the table
-                        if 'select_sql' in item:
-                            status, sel_res = conn.execute_dict(
-                                item['select_sql'], res['rows'][0])
-
-                            if not status:
-                                conn.execute_void('ROLLBACK;')
-                                # If we roll backed every thing then update
-                                # the message for each sql query.
-                                for val in query_res:
-                                    if query_res[val]['status']:
-                                        query_res[val]['result'] = \
-                                            'Transaction ROLLBACK'
-
-                                # If list is empty set rowid to 1
-                                try:
-                                    if list_of_rowid:
-                                        _rowid = list_of_rowid[count]
-                                    else:
-                                        _rowid = 1
-                                except Exception:
-                                    _rowid = 0
-
-                                return status, sel_res, query_res, _rowid
-
-                            if 'rows' in sel_res and len(sel_res['rows']) > 0:
-                                row_added = {
-                                    item['client_row']: sel_res['rows'][0]}
-
-                        rows_affected = conn.rows_affected()
-
-                        # store the result of each query in dictionary
-                        query_res[count] = {
-                            'status': status,
-                            'result': None if row_added else res,
-                            'sql': sql, 'rows_affected': rows_affected,
-                            'row_added': row_added
-                        }
-
-                        count += 1
-
-            # Commit the transaction if there is no error found
-            conn.execute_void('COMMIT;')
-
-        return status, res, query_res, _rowid
+        return save_changed_data(changed_data=changed_data,
+                                 columns_info=columns_info,
+                                 command_obj=self,
+                                 client_primary_key=client_primary_key,
+                                 conn=conn)
 
 
 class ViewCommand(GridCommand):
@@ -1089,18 +860,88 @@ class QueryToolCommand(BaseCommand, FetchedRowTracker):
         self.auto_rollback = False
         self.auto_commit = True
 
+        # Attributes needed to be able to edit updatable resultsets
+        self.is_updatable_resultset = False
+        self.primary_keys = None
+        self.pk_names = None
+
     def get_sql(self, default_conn=None):
         return None
 
     def get_all_columns_with_order(self, default_conn=None):
         return None
 
+    def get_primary_keys(self):
+        return self.pk_names, self.primary_keys
+
     def can_edit(self):
-        return False
+        return self.is_updatable_resultset
 
     def can_filter(self):
         return False
 
+    def check_for_updatable_resultset_and_primary_keys(self):
+        """
+            This function is used to check whether the last successful query
+            produced updatable results and sets the necessary flags and
+            attributes accordingly
+        """
+        # Fetch the connection object
+        driver = get_driver(PG_DEFAULT_DRIVER)
+        manager = driver.connection_manager(self.sid)
+        conn = manager.connection(did=self.did, conn_id=self.conn_id)
+
+        # Check that the query results are ready first
+        status, result = conn.poll(
+            formatted_exception_msg=True, no_result=True)
+        if status != ASYNC_OK:
+            return
+
+        # Get the path to the sql templates
+        sql_path = 'sqleditor/sql/#{0}#'.format(manager.version)
+
+        self.is_updatable_resultset, self.primary_keys, pk_names, table_oid = \
+            is_query_resultset_updatable(conn, sql_path)
+
+        # Create pk_names attribute in the required format
+        if pk_names is not None:
+            self.pk_names = ''
+
+            for pk_name in pk_names:
+                self.pk_names += driver.qtIdent(conn, pk_name) + ','
+
+            if self.pk_names != '':
+                # Remove last character from the string
+                self.pk_names = self.pk_names[:-1]
+
+        # Add attributes required to be able to update table data
+        if self.is_updatable_resultset:
+            self.__set_updatable_resultset_attributes(sql_path=sql_path,
+                                                      table_oid=table_oid,
+                                                      conn=conn)
+
+    def save(self,
+             changed_data,
+             columns_info,
+             client_primary_key='__temp_PK',
+             default_conn=None):
+        if not self.is_updatable_resultset:
+            return False, gettext('Resultset is not updatable.'), None, None
+        else:
+            driver = get_driver(PG_DEFAULT_DRIVER)
+            if default_conn is None:
+                manager = driver.connection_manager(self.sid)
+                conn = manager.connection(did=self.did, conn_id=self.conn_id)
+            else:
+                conn = default_conn
+
+            return save_changed_data(changed_data=changed_data,
+                                     columns_info=columns_info,
+                                     conn=conn,
+                                     command_obj=self,
+                                     client_primary_key=client_primary_key,
+                                     auto_commit=self.auto_commit)
+
     def set_connection_id(self, conn_id):
         self.conn_id = conn_id
 
@@ -1109,3 +950,28 @@ class QueryToolCommand(BaseCommand, FetchedRowTracker):
 
     def set_auto_commit(self, auto_commit):
         self.auto_commit = auto_commit
+
+    def __set_updatable_resultset_attributes(self, sql_path,
+                                             table_oid, conn):
+        # Set template path for sql scripts and the table object id
+        self.sql_path = sql_path
+        self.obj_id = table_oid
+
+        if conn.connected():
+            # Fetch the Namespace Name and object Name
+            query = render_template(
+                "/".join([self.sql_path, 'objectname.sql']),
+                obj_id=self.obj_id
+            )
+
+            status, result = conn.execute_dict(query)
+            if not status:
+                raise Exception(result)
+
+            self.nsp_name = result['rows'][0]['nspname']
+            self.object_name = result['rows'][0]['relname']
+        else:
+            raise Exception(gettext(
+                'Not connected to server or connection with the server '
+                'has been closed.')
+            )
diff --git a/web/pgadmin/tools/sqleditor/static/css/sqleditor.css b/web/pgadmin/tools/sqleditor/static/css/sqleditor.css
index 86d3defc..1e7bfc22 100644
--- a/web/pgadmin/tools/sqleditor/static/css/sqleditor.css
+++ b/web/pgadmin/tools/sqleditor/static/css/sqleditor.css
@@ -427,3 +427,8 @@ input.editor-checkbox:focus {
 .hide-vertical-scrollbar {
   overflow-y: hidden;
 }
+
+.highlighted_grid_cells {
+  background: #f4f4f4;
+  font-weight: bold;
+}
diff --git a/web/pgadmin/tools/sqleditor/static/js/sqleditor.js b/web/pgadmin/tools/sqleditor/static/js/sqleditor.js
index 6af098b4..5a31b2d9 100644
--- a/web/pgadmin/tools/sqleditor/static/js/sqleditor.js
+++ b/web/pgadmin/tools/sqleditor/static/js/sqleditor.js
@@ -734,6 +734,8 @@ define('tools.querytool', [
     render_grid: function(collection, columns, is_editable, client_primary_key, rows_affected) {
       var self = this;
 
+      self.handler.numberOfModifiedCells = 0;
+
       // This will work as data store and holds all the
       // inserted/updated/deleted data from grid
       self.handler.data_store = {
@@ -1067,6 +1069,14 @@ define('tools.querytool', [
           _pk = args.item[self.client_primary_key] || null, // Unique key to identify row
           column_data = {};
 
+        // Highlight the changed cell
+        self.handler.numberOfModifiedCells++;
+        args.grid.addCellCssStyles(self.handler.numberOfModifiedCells, {
+          [args.row] : {
+            [changed_column]: 'highlighted_grid_cells',
+          },
+        });
+
         // Access to row/cell value after a cell is changed.
         // The purpose is to remove row_id from temp_new_row
         // if new row has primary key instead of [default_value]
@@ -1150,6 +1160,7 @@ define('tools.querytool', [
           'data': item,
         };
         self.handler.data_store.added_index[data_length] = _key;
+
         // Fetch data type & add it for the column
         var temp = {};
         temp[column.name] = _.where(this.columns, {
@@ -1158,6 +1169,15 @@ define('tools.querytool', [
         grid.updateRowCount();
         grid.render();
 
+        // Highlight the first added cell of the new row
+        var row = dataView.getRowByItem(item);
+        self.handler.numberOfModifiedCells++;
+        args.grid.addCellCssStyles(self.handler.numberOfModifiedCells, {
+          [row] : {
+            [column.field]: 'highlighted_grid_cells',
+          },
+        });
+
         // Enable save button
         $('#btn-save').prop('disabled', false);
       }.bind(editor_data));
@@ -1208,9 +1228,11 @@ define('tools.querytool', [
       }
       dataView.setItems(collection, self.client_primary_key);
     },
+
     fetch_next_all: function(cb) {
       this.fetch_next(true, cb);
     },
+
     fetch_next: function(fetch_all, cb) {
       var self = this,
         url = '';
@@ -2376,6 +2398,18 @@ define('tools.querytool', [
         else
           self.can_edit = true;
 
+        /* If the query results are updatable then keep track of newly
+         * added rows
+         */
+        if (self.is_query_tool && self.can_edit) {
+          // Rows to be disabled in the grid
+          self.rows_to_disable = [];
+          // Temporarily hold newly added rows
+          self.temp_new_rows = [];
+          self.has_more_rows = false;
+          self.fetching_rows = false;
+        }
+
         /* If user can filter the data then we should enabled
          * Filter and Limit buttons.
          */
@@ -2818,12 +2852,15 @@ define('tools.querytool', [
        * the ajax call to save the data into the database server.
        * and will open save file dialog conditionally.
        */
-      _save: function(view, controller, save_as) {
+      _save: function(view, controller, save_as=false) {
         var self = this,
           save_data = true;
 
-        // Open save file dialog if query tool
-        if (self.is_query_tool) {
+        // Open save file dialog if query tool and:
+        // - results are not editable
+        // or
+        // - 'save as' is pressed instead of 'save'
+        if (self.is_query_tool && (!self.can_edit || save_as)) {
           var current_file = self.gridView.current_file;
           if (!_.isUndefined(current_file) && !save_as) {
             self._save_file_handler(current_file);
@@ -2852,7 +2889,6 @@ define('tools.querytool', [
         }
 
         if (save_data) {
-
           self.trigger(
             'pgadmin-sqleditor:loading-icon:show',
             gettext('Saving the updated data...')
@@ -2878,6 +2914,13 @@ define('tools.querytool', [
                 data = [];
 
               if (res.data.status) {
+
+                // Remove highlighted cells styling
+                for (let i = 1; i <= self.numberOfModifiedCells; i++)
+                  grid.removeCellCssStyles(i);
+
+                self.numberOfModifiedCells = 0;
+
                 if(is_added) {
                 // Update the rows in a grid after addition
                   dataView.beginUpdate();
@@ -3592,8 +3635,35 @@ define('tools.querytool', [
       // This function will fetch the sql query from the text box
       // and execute the query.
       execute: function(explain_prefix, shouldReconnect=false) {
-        var self = this,
-          sql = '';
+        var self = this;
+
+        // Check if the data grid has any changes before running query
+        if (self.can_edit && _.has(self, 'data_store') &&
+          (_.size(self.data_store.added) ||
+            _.size(self.data_store.updated) ||
+            _.size(self.data_store.deleted))
+        ) {
+          alertify.confirm(gettext('Unsaved changes'),
+            gettext('The data has been modified, but not saved. Are you sure you wish to discard the changes?'),
+            function() {
+              // The user does not want to save; just continue
+              self._execute_sql_query(explain_prefix, shouldReconnect);
+            },
+            function() {
+              // Stop; the user wants to save
+              return true;
+            }
+          ).set('labels', {
+            ok: gettext('Yes'),
+            cancel: gettext('No'),
+          });
+        } else {
+          self._execute_sql_query(explain_prefix, shouldReconnect);
+        }
+      },
+
+      _execute_sql_query: function(explain_prefix, shouldReconnect) {
+        var self = this, sql = '';
 
         self.has_more_rows = false;
         self.fetching_rows = false;
diff --git a/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/11_plus/primary_keys.sql b/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/11_plus/primary_keys.sql
index 1dfb094f..459977e9 100644
--- a/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/11_plus/primary_keys.sql
+++ b/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/11_plus/primary_keys.sql
@@ -1,6 +1,6 @@
 {# ============= Fetch the primary keys for given object id ============= #}
 {% if obj_id %}
-SELECT at.attname, ty.typname
+SELECT at.attname, at.attnum, ty.typname
 FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
 WHERE attrelid={{obj_id}}::oid AND attnum = ANY (
     (SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid
diff --git a/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/default/primary_keys.sql b/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/default/primary_keys.sql
index 60d0e56f..a96c928f 100644
--- a/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/default/primary_keys.sql
+++ b/web/pgadmin/tools/sqleditor/templates/sqleditor/sql/default/primary_keys.sql
@@ -1,8 +1,8 @@
 {# ============= Fetch the primary keys for given object id ============= #}
 {% if obj_id %}
-SELECT at.attname, ty.typname
+SELECT at.attname, at.attnum, ty.typname
 FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
 WHERE attrelid={{obj_id}}::oid AND attnum = ANY (
     (SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid
     AND con.contype='p' WHERE rel.relkind IN ('r','s','t') AND rel.oid = {{obj_id}}::oid)::oid[])
-{% endif %}
\ No newline at end of file
+{% endif %}
diff --git a/web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py b/web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py
new file mode 100644
index 00000000..a89ecc6a
--- /dev/null
+++ b/web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py
@@ -0,0 +1,97 @@
+##########################################################################
+#
+# pgAdmin 4 - PostgreSQL Tools
+#
+# Copyright (C) 2013 - 2019, The pgAdmin Development Team
+# This software is released under the PostgreSQL Licence
+#
+##########################################################################
+
+"""
+    Check if the result set of a query is updatable. A result set is
+    updatable (as of this version) if:
+        - All columns belong to the same table.
+        - All the primary key columns of the table are in the result set.
+        - There are no duplicate columns.
+"""
+from flask import render_template
+try:
+    from collections import OrderedDict
+except ImportError:
+    from ordereddict import OrderedDict
+
+
+def is_query_resultset_updatable(conn, sql_path):
+    """
+        This function is used to check whether the last successful query
+        produced updatable results.
+
+        Args:
+            conn: Connection object.
+            sql_path: the path to the sql templates.
+    """
+    columns_info = conn.get_column_info()
+
+    # Fetch the column info
+    if columns_info is None or len(columns_info) < 1:
+        return False, None, None, None
+
+    # First check that all the columns belong to a single table
+    table_oid = columns_info[0]['table_oid']
+    for column in columns_info:
+        if column['table_oid'] != table_oid:
+            return False, None, None, None
+
+    # Check for duplicate columns
+    column_numbers = \
+        [col['table_column'] for col in columns_info]
+    is_duplicate_columns = len(column_numbers) != len(set(column_numbers))
+    if is_duplicate_columns:
+        return False, None, None, None
+
+    if conn.connected():
+        # Then check that all the primary keys of the table are present
+        # and no primary keys are renamed
+        # (or other columns renamed to be like primary keys)
+        query = render_template(
+            "/".join([sql_path, 'primary_keys.sql']),
+            obj_id=table_oid
+        )
+        status, result = conn.execute_dict(query)
+        if not status:
+            return False, None, None, None
+
+        primary_keys_columns = []
+        primary_keys = OrderedDict()
+        pk_names = []
+
+        for row in result['rows']:
+            primary_keys[row['attname']] = row['typname']
+            primary_keys_columns.append({
+                'name': row['attname'],
+                'column_number': row['attnum']
+            })
+            pk_names.append(row['attname'])
+
+        # Check that all primary keys exist and that all of them are not
+        # renamed and other columns are not renamed to primary key names
+        for pk in primary_keys_columns:
+            pk_exists = False
+            for col in columns_info:
+                if col['table_column'] == pk['column_number']:
+                    pk_exists = True
+                    # If the primary key column is renamed
+                    if col['display_name'] != pk['name']:
+                        return False, None, None, None
+                # If a normal column is renamed to a primary key column name
+                elif col['display_name'] == pk['name']:
+                    return False, None, None, None
+
+            if not pk_exists:
+                return False, None, None, None
+
+        # If the for loop exited without returning from the function then
+        # all primary keys exist without being renamed
+        return True, primary_keys, pk_names, table_oid
+    else:
+        return False, None, None, None
diff --git a/web/pgadmin/tools/sqleditor/utils/save_changed_data.py b/web/pgadmin/tools/sqleditor/utils/save_changed_data.py
new file mode 100644
index 00000000..44540eee
--- /dev/null
+++ b/web/pgadmin/tools/sqleditor/utils/save_changed_data.py
@@ -0,0 +1,271 @@
+##########################################################################
+#
+# pgAdmin 4 - PostgreSQL Tools
+#
+# Copyright (C) 2013 - 2019, The pgAdmin Development Team
+# This software is released under the PostgreSQL Licence
+#
+##########################################################################
+
+from flask import render_template
+from pgadmin.tools.sqleditor.utils.constant_definition import TX_STATUS_IDLE
+try:
+    from collections import OrderedDict
+except ImportError:
+    from ordereddict import OrderedDict
+
+
+def save_changed_data(changed_data, columns_info, conn, command_obj,
+                      client_primary_key, auto_commit=True):
+    """
+    This function is used to save the data into the database.
+    Depending on condition it will either update or insert the
+    new row into the database.
+
+    Args:
+        changed_data: Contains data to be saved
+        command_obj: The transaction object (command_obj or trans_obj)
+        conn: The connection object
+        columns_info: session_obj['columns_info']
+        client_primary_key: session_obj['client_primary_key']
+        auto_commit: If the changes should be committed automatically.
+    """
+    status = False
+    res = None
+    query_res = dict()
+    count = 0
+    list_of_rowid = []
+    operations = ('added', 'updated', 'deleted')
+    list_of_sql = {}
+    _rowid = None
+
+    if conn.connected():
+
+        # Start the transaction if the session is idle
+        if conn.transaction_status() == TX_STATUS_IDLE:
+            conn.execute_void('BEGIN;')
+
+        # Iterate total number of records to be updated/inserted
+        for of_type in changed_data:
+            # No need to go further if it's not an add/update/delete operation
+            if of_type not in operations:
+                continue
+            # If there is no data to save, continue
+            if len(changed_data[of_type]) < 1:
+                continue
+
+            column_type = {}
+            column_data = {}
+            for each_col in columns_info:
+                if (
+                    columns_info[each_col]['not_null'] and
+                    not columns_info[each_col]['has_default_val']
+                ):
+                    column_data[each_col] = None
+                    column_type[each_col] = \
+                        columns_info[each_col]['type_name']
+                else:
+                    column_type[each_col] = \
+                        columns_info[each_col]['type_name']
+
+            # For newly added rows
+            if of_type == 'added':
+                # A Python dict does not preserve insertion order, so
+                # to insert data in order we need an ordered list of
+                # the added indexes. Updated/deleted rows do not need
+                # this mechanism, as order does not matter for those
+                # operations.
+                added_index = OrderedDict(
+                    sorted(
+                        changed_data['added_index'].items(),
+                        key=lambda x: int(x[0])
+                    )
+                )
+                list_of_sql[of_type] = []
+
+                # When new rows are added, only the changed columns'
+                # data is sent from the client side. If a column is
+                # NOT NULL and has no default value, set it to blank
+                # instead of the NULL that is set by default.
+                column_data = {}
+                pk_names, primary_keys = command_obj.get_primary_keys()
+                has_oids = 'oid' in column_type
+
+                for each_row in added_index:
+                    # Get the row index to match with the added rows
+                    # dict key
+                    tmp_row_index = added_index[each_row]
+                    data = changed_data[of_type][tmp_row_index]['data']
+                    # Remove our unique tracking key
+                    data.pop(client_primary_key, None)
+                    data.pop('is_row_copied', None)
+                    list_of_rowid.append(data.get(client_primary_key))
+
+                    # Update columns value with columns having
+                    # not_null=False and has no default value
+                    column_data.update(data)
+
+                    sql = render_template(
+                        "/".join([command_obj.sql_path, 'insert.sql']),
+                        data_to_be_saved=column_data,
+                        primary_keys=None,
+                        object_name=command_obj.object_name,
+                        nsp_name=command_obj.nsp_name,
+                        data_type=column_type,
+                        pk_names=pk_names,
+                        has_oids=has_oids
+                    )
+
+                    select_sql = render_template(
+                        "/".join([command_obj.sql_path, 'select.sql']),
+                        object_name=command_obj.object_name,
+                        nsp_name=command_obj.nsp_name,
+                        primary_keys=primary_keys,
+                        has_oids=has_oids
+                    )
+
+                    list_of_sql[of_type].append({
+                        'sql': sql, 'data': data,
+                        'client_row': tmp_row_index,
+                        'select_sql': select_sql
+                    })
+                    # Reset column data
+                    column_data = {}
+
+            # For updated rows
+            elif of_type == 'updated':
+                list_of_sql[of_type] = []
+                for each_row in changed_data[of_type]:
+                    data = changed_data[of_type][each_row]['data']
+                    pk = changed_data[of_type][each_row]['primary_keys']
+                    sql = render_template(
+                        "/".join([command_obj.sql_path, 'update.sql']),
+                        data_to_be_saved=data,
+                        primary_keys=pk,
+                        object_name=command_obj.object_name,
+                        nsp_name=command_obj.nsp_name,
+                        data_type=column_type
+                    )
+                    list_of_sql[of_type].append({'sql': sql, 'data': data})
+                    list_of_rowid.append(data.get(client_primary_key))
+
+            # For deleted rows
+            elif of_type == 'deleted':
+                list_of_sql[of_type] = []
+                is_first = True
+                rows_to_delete = []
+                keys = None
+                no_of_keys = None
+                for each_row in changed_data[of_type]:
+                    rows_to_delete.append(changed_data[of_type][each_row])
+                    # Fetch the keys for SQL generation
+                    if is_first:
+                        # We need to convert dict_keys to a normal
+                        # list in Python 3; in Python 2 it is already
+                        # a list. We will also fetch column names
+                        # using the index.
+                        keys = list(
+                            changed_data[of_type][each_row].keys()
+                        )
+                        no_of_keys = len(keys)
+                        is_first = False
+                # Map index with column name for each row
+                for row in rows_to_delete:
+                    for k, v in row.items():
+                        # Set primary key with label & delete index based
+                        # mapped key
+                        try:
+                            row[changed_data['columns']
+                                            [int(k)]['name']] = v
+                        except ValueError:
+                            continue
+                        del row[k]
+
+                sql = render_template(
+                    "/".join([command_obj.sql_path, 'delete.sql']),
+                    data=rows_to_delete,
+                    primary_key_labels=keys,
+                    no_of_keys=no_of_keys,
+                    object_name=command_obj.object_name,
+                    nsp_name=command_obj.nsp_name
+                )
+                list_of_sql[of_type].append({'sql': sql, 'data': {}})
+
+        for opr, sqls in list_of_sql.items():
+            for item in sqls:
+                if item['sql']:
+                    row_added = None
+
+                    # Fetch oids/primary keys
+                    if 'select_sql' in item and item['select_sql']:
+                        status, res = conn.execute_dict(
+                            item['sql'], item['data'])
+                    else:
+                        status, res = conn.execute_void(
+                            item['sql'], item['data'])
+
+                    if not status:
+                        conn.execute_void('ROLLBACK;')
+                        # If we rolled everything back then update the
+                        # message for each SQL query.
+                        for val in query_res:
+                            if query_res[val]['status']:
+                                query_res[val]['result'] = \
+                                    'Transaction ROLLBACK'
+
+                        # If list is empty set rowid to 1
+                        try:
+                            if list_of_rowid:
+                                _rowid = list_of_rowid[count]
+                            else:
+                                _rowid = 1
+                        except Exception:
+                            _rowid = 0
+
+                        return status, res, query_res, _rowid
+
+                    # Select added row from the table
+                    if 'select_sql' in item:
+                        status, sel_res = conn.execute_dict(
+                            item['select_sql'], res['rows'][0])
+
+                        if not status:
+                            conn.execute_void('ROLLBACK;')
+                            # If we rolled everything back then update
+                            # the message for each SQL query.
+                            for val in query_res:
+                                if query_res[val]['status']:
+                                    query_res[val]['result'] = \
+                                        'Transaction ROLLBACK'
+
+                            # If list is empty set rowid to 1
+                            try:
+                                if list_of_rowid:
+                                    _rowid = list_of_rowid[count]
+                                else:
+                                    _rowid = 1
+                            except Exception:
+                                _rowid = 0
+
+                            return status, sel_res, query_res, _rowid
+
+                        if 'rows' in sel_res and len(sel_res['rows']) > 0:
+                            row_added = {
+                                item['client_row']: sel_res['rows'][0]}
+
+                    rows_affected = conn.rows_affected()
+                    # store the result of each query in dictionary
+                    query_res[count] = {
+                        'status': status,
+                        'result': None if row_added else res,
+                        'sql': item['sql'], 'rows_affected': rows_affected,
+                        'row_added': row_added
+                    }
+
+                    count += 1
+
+        # Commit the transaction if there is no error found
+        if auto_commit:
+            conn.execute_void('COMMIT;')
+
+    return status, res, query_res, _rowid
diff --git a/web/pgadmin/tools/sqleditor/utils/start_running_query.py b/web/pgadmin/tools/sqleditor/utils/start_running_query.py
index a5399774..ece11f9c 100644
--- a/web/pgadmin/tools/sqleditor/utils/start_running_query.py
+++ b/web/pgadmin/tools/sqleditor/utils/start_running_query.py
@@ -45,6 +45,9 @@ class StartRunningQuery:
         if type(session_obj) is Response:
             return session_obj
 
+        # Remove any existing primary keys in session_obj
+        session_obj.pop('primary_keys', None)
+
         transaction_object = pickle.loads(session_obj['command_obj'])
         can_edit = False
         can_filter = False

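For reviewers: the three updatability rules that `is_query_resultset_updatable` enforces can be sketched standalone like this. The dict shapes below are illustrative only (they mimic, but are not, the exact structures returned by `conn.get_column_info()` and the `primary_keys.sql` query):

```python
# Minimal sketch of the updatability rules from the patch: all columns
# must come from one table, no duplicate columns, and every primary key
# must be present under its original (unrenamed) name.

def resultset_is_updatable(columns_info, primary_keys):
    """columns_info: [{'table_oid', 'table_column', 'display_name'}, ...]
    primary_keys:  [{'name', 'column_number'}, ...]"""
    if not columns_info:
        return False

    # Rule 1: every column belongs to the same table
    table_oid = columns_info[0]['table_oid']
    if any(col['table_oid'] != table_oid for col in columns_info):
        return False

    # Rule 2: no duplicate columns in the result set
    numbers = [col['table_column'] for col in columns_info]
    if len(numbers) != len(set(numbers)):
        return False

    # Rule 3: each primary key is present and not renamed, and no other
    # column is aliased to a primary key's name
    for pk in primary_keys:
        found = False
        for col in columns_info:
            if col['table_column'] == pk['column_number']:
                found = True
                if col['display_name'] != pk['name']:
                    return False
            elif col['display_name'] == pk['name']:
                return False
        if not found:
            return False
    return True
```

This is why the templates now also select `at.attnum`: the column number is what lets the check detect a renamed primary key versus an ordinary column aliased to a key's name.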