oliverbell-klv opened a new issue, #34736:
URL: https://github.com/apache/superset/issues/34736

   ### Bug description
   
   ### Description
   
   We have a Snowflake connection that works fine in SQL Lab (queries succeed).
   When creating a new dataset or running “Test Connection,” Superset fails 
with:
   
   ```
   An Error Occurred
   Unable to load columns for the selected table. Please select a different table.
   ```
   
   Logs show underlying errors from the Snowflake connector when trying to 
fetch staged results:
   
   ```
   HTTPSConnectionPool(host='<Snowflake staging S3 bucket in us-west-2>', port=443):
   Max retries exceeded … Remote end closed connection without response
   ```
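   
   More detail than the truncated traceback above can be captured by turning on connector-level debug logging before reproducing the failure. A minimal sketch using Python's standard logging module (the logger names below are the ones used by snowflake-connector-python and its HTTP stack):
   
   ```python
   import logging

   # Surface full request/retry detail from the Snowflake connector and
   # urllib3, including the exact staging-bucket URL and the retries that
   # precede "Max retries exceeded".
   logging.basicConfig(level=logging.DEBUG)
   logging.getLogger("snowflake.connector").setLevel(logging.DEBUG)
   logging.getLogger("urllib3").setLevel(logging.DEBUG)
   ```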
   
   ### Screenshots
   
   <img width="1719" height="863" alt="Image" src="https://github.com/user-attachments/assets/34edbfe1-77ee-4c76-8c18-c082ec672498" />
   
   ### Repro steps
   
   1. Go to Datasets
   2. Click + Dataset
   3. Pick the Snowflake DB connection
   4. Choose schema + table
   5. Observe the error
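   
   The same reflection path can be exercised outside Superset. A minimal sketch with placeholder credentials and a hypothetical table name; `inspect(engine).get_columns()` is the same SQLAlchemy inspector call Superset relies on to populate the column list:
   
   ```python
   from sqlalchemy import create_engine, inspect

   # Placeholder connection details -- substitute real values.
   engine = create_engine(
       "snowflake://USER:PASSWORD@ACCOUNT/DATABASE/SCHEMA?warehouse=WAREHOUSE"
   )

   # Superset's dataset flow goes through the SQLAlchemy inspector; if this
   # fails with the same HTTPSConnectionPool error, the problem is in the
   # connector/reflection path rather than Superset itself.
   inspector = inspect(engine)
   print(inspector.get_columns("MY_TABLE", schema="SCHEMA"))
   ```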
   
   ### Expected
   
   Columns load, Test Connection succeeds.
   
   ### Actual
   
   - Dataset creation fails.
   - Test Connection fails.
   - SQL Lab queries continue to work (likely because small results don’t hit S3 staging; a test sketch follows).
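   
   One way to test that hypothesis, sketched below with placeholder credentials: force a result set large enough to be staged and see whether a plain query then fails the same way.
   
   ```python
   import snowflake.connector

   # Placeholder credentials -- substitute real values.
   conn = snowflake.connector.connect(
       account="ACCOUNT", user="USER", password="PASSWORD",
       warehouse="WAREHOUSE", database="DATABASE", schema="SCHEMA",
   )

   # A result set this large should spill to the S3 stage; if fetching it
   # reproduces the HTTPSConnectionPool error, the staged-download path is
   # broken generally, not just for Superset's metadata queries.
   cur = conn.cursor()
   cur.execute("SELECT seq4() AS n FROM TABLE(GENERATOR(ROWCOUNT => 5000000))")
   print(len(cur.fetchall()))
   ```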
   
   ### Environment
   
   - Superset 4.1.1 (dockerized, AWS ECS Fargate)
   - Python 3.9 (default in base image)
   - Snowflake connector: 3.16.0
   - Snowflake SQLAlchemy: 1.7.6
   - Snowflake region: AWS us-west-2
   
   ### Troubleshooting performed
   
   - Confirmed Snowflake network policy allows our NAT egress IPs.
   - Verified no failed logins in Snowflake login history (the failure does not appear to be authentication-related).
   - Increased Superset/Gunicorn/ALB timeouts.
   - Disabled proxy variables, set NO_PROXY for Snowflake/AWS domains.
   - Tried connector options: ocsp_fail_open, insecure_mode, and session parameters (CLIENT_PREFETCH_THREADS, CLIENT_RESULT_CHUNK_SIZE, USE_S3_REGIONAL_URL, etc.); see the sketch after this list.
   - Added a curl sidecar: the Snowflake account host and generic S3 endpoints are reachable, but requests to the staging S3 bucket sometimes fail.
   - Tested with a minimal 40-row table; it still fails.
   - Confirmed the issue is specific to metadata/reflection queries, not result size.
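   
   For anyone reproducing the option experiments above: in Superset these reach the connector via the database's Extra field (`engine_params.connect_args`), which is equivalent to passing `connect_args` to `create_engine()` directly. A sketch with illustrative values, not a recommendation:
   
   ```python
   from sqlalchemy import create_engine

   # Equivalent of Superset's Extra -> engine_params -> connect_args; these
   # keyword arguments are forwarded to snowflake.connector.connect().
   engine = create_engine(
       "snowflake://USER:PASSWORD@ACCOUNT/DATABASE/SCHEMA?warehouse=WAREHOUSE",
       connect_args={
           "ocsp_fail_open": True,
           "insecure_mode": True,  # test-only: relaxes certificate revocation checks
           "session_parameters": {
               "CLIENT_PREFETCH_THREADS": 1,
               "CLIENT_RESULT_CHUNK_SIZE": 16,
           },
       },
   )
   ```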
   
   ### Why this might be Superset-related
   
   - SQL Lab queries work, but inspector/metadata queries consistently fail.
   - The difference appears to be in how Superset drives the connector for reflection/metadata queries (larger result sets that get staged) vs. SQL Lab.
   - Looking to confirm whether this is a known issue with staging downloads in Superset’s Snowflake integration, and whether there are recommended config flags, retries, or version pins (a retry sketch follows).
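   
   If the root cause turns out to be a transient connection drop during staged-result downloads, a blunt test is to retry the reflection call. A minimal sketch for experimentation only; `get_columns_with_retry` is a hypothetical helper, not a Superset or SQLAlchemy API:
   
   ```python
   import time

   from sqlalchemy import inspect


   def get_columns_with_retry(engine, table, schema, attempts=3, backoff=2.0):
       """Retry inspector.get_columns() on transient errors (hypothetical helper)."""
       for attempt in range(1, attempts + 1):
           try:
               return inspect(engine).get_columns(table, schema=schema)
           except Exception:  # ideally narrow this to the connector's network errors
               if attempt == attempts:
                   raise
               time.sleep(backoff * attempt)
   ```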
   
   ### Screenshots/recordings
   
   _No response_
   
   ### Superset version
   
   master / latest-dev
   
   ### Python version
   
   3.9
   
   ### Node version
   
   16
   
   ### Browser
   
   Chrome
   
   ### Additional context
   
   _No response_
   
   ### Checklist
   
   - [x] I have searched Superset docs and Slack and didn't find a solution to 
my problem.
   - [x] I have searched the GitHub issue tracker and didn't find a similar bug 
report.
   - [x] I have checked Superset's logs for errors and if I found a relevant 
Python stacktrace, I included it here as text in the "additional context" 
section.

