Hackers,

Regarding #5797 <https://github.com/pgadmin-org/pgadmin4/issues/5797>: "Full query result is being loaded into RAM despite ON_DEMAND_RECORD_COUNT=1000"
pgAdmin uses an *async cursor* to fetch the data in the Query Tool. This fetches and stores the entire result set on the client side, which in our case is the pgAdmin server. So, if the query result is very large, it uses a lot of memory on the pgAdmin server. ON_DEMAND_RECORD_COUNT only controls how much of that data (which has already been transferred to the pgAdmin server) we read from the cursor and show in the UI at a time.

To overcome this, we can use an *async server cursor*, which transfers data from the Postgres server to the client (the pgAdmin server) on demand. This will reduce memory consumption and improve performance (a rough illustrative sketch is included at the end of this mail). There are some downsides, too:

1. The *server cursor* does not return the total number of rows.
   - In this case, we will have a problem with pagination. We can either show only the *Next page* button and hide the *Last page* button, since we will not know the exact number of pages (on clicking *Next*, we show the result if it exists), or we can use infinite scrolling for the server cursor.

2. The *server cursor* is less efficient for small query results, as it takes more commands/round trips to receive the results.
   - We can add an option in the Query Tool to run the query with either the *server* or the *client* cursor.

Thanks,
Khushboo
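PS: For illustration only, here is a minimal sketch of the server-cursor idea using psycopg. The function name, connection string, cursor name, and batch size are placeholders I made up for the example; this is not the actual Query Tool driver code, just the fetch-on-demand pattern I have in mind.

```python
import asyncio
import psycopg  # psycopg 3


async def fetch_in_batches(dsn, query, batch_size=1000):
    # A named (server-side) cursor keeps the result set on the Postgres
    # server and transfers rows to pgAdmin only in batches, instead of
    # materialising the whole result in the pgAdmin server's memory.
    async with await psycopg.AsyncConnection.connect(dsn) as conn:
        async with conn.cursor(name="qt_server_cursor") as cur:
            cur.itersize = batch_size       # rows per network round trip
            await cur.execute(query)
            while True:
                rows = await cur.fetchmany(batch_size)
                if not rows:
                    break
                yield rows                  # hand one "page" to the UI layer


async def main():
    # Hypothetical usage: stream pages of 1000 rows without knowing the
    # total row count up front (hence the pagination limitation above).
    async for page in fetch_in_batches("dbname=postgres",
                                       "SELECT * FROM big_table"):
        print(len(page), "rows received")


if __name__ == "__main__":
    asyncio.run(main())
```

Note that, unlike the client-side cursor, the total rowcount is not available until the cursor is exhausted, which is why the pagination UI would have to change as described above.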