[jira] [Updated] (AIRFLOW-2827) Tasks that fail with spurious Celery issues are not retried

2018-07-30 Thread James Davidheiser (JIRA)


 [ 
https://issues.apache.org/jira/browse/AIRFLOW-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Davidheiser updated AIRFLOW-2827:
---
Description: 
We have a DAG with ~500 tasks, running on Airflow set up in Kubernetes with 
RabbitMQ, using a setup derived pretty heavily from 
[https://github.com/mumoshu/kube-airflow].  Occasionally, we hit spurious 
Celery execution failures (possibly related to 
https://issues.apache.org/jira/browse/AIRFLOW-2011), resulting in the worker 
throwing errors that look like this:

 

 
{code:java}
[2018-07-30 11:04:26,812: ERROR/ForkPoolWorker-9] Task 
airflow.executors.celery_executor.execute_command[462de800-ad3f-4151-90bf-9155cc6c66f6]
 raised unexpected: AirflowException('Celery command failed',)
 Traceback (most recent call last):
   File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 382, 
in trace_task
     R = retval = fun(*args, **kwargs)
   File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 641, 
in __protected_call__
     return self.run(*args, **kwargs)
   File 
"/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py", 
line 55, in execute_command
     raise AirflowException('Celery command failed')
 AirflowException: Celery command failed{code}
 

 

 

When these tasks fail, they send a "task failed" email that has very little 
information about why the task failed.  The logs for the task run are empty, 
because the task never actually did anything and the error message was 
generated by the worker.  Also, the task does not retry, so if something goes 
wrong with Celery, the task simply fails outright instead of trying again.

 

This may be the same issue reported in  
https://issues.apache.org/jira/browse/AIRFLOW-1844, but I am not sure because 
there is not much detail there.
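
For reference, retries are configured on these tasks; a minimal sketch of the 
setup is below (the DAG and task names are illustrative, not our actual 
pipeline).  Even so, in this failure mode the task fails outright without 
consuming a retry:

{code:python}
# Illustrative only -- a minimal DAG sketch (names and values are made up, not
# our production pipeline) showing that retries are configured on the tasks.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "airflow",
    "retries": 3,                          # normal task failures retry 3 times
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,
}

dag = DAG(
    dag_id="example_pipeline",
    default_args=default_args,
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

# When the worker raises "Celery command failed", this task is marked failed
# immediately: the retries above are never attempted and its log stays empty.
extract = BashOperator(task_id="extract", bash_command="echo extracting", dag=dag)
{code}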

  was:
We have a DAG with ~500 tasks, running on Airflow set up in Kubernetes with 
RabbitMQ using a setup derived pretty heavily from 
[https://github.com/mumoshu/kube-airflow.]  Occasionally, we will hit some 
spurious Celery execution failures (possibly related to #2011 ), resulting in 
the Worker throwing errors that look like this:

 

```[2018-07-30 11:04:26,812: ERROR/ForkPoolWorker-9] Task 
airflow.executors.celery_executor.execute_command[462de800-ad3f-4151-90bf-9155cc6c66f6]
 raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 382, 
in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 641, 
in __protected_call__
    return self.run(*args, **kwargs)
  File 
"/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py", 
line 55, in execute_command
    raise AirflowException('Celery command failed')
AirflowException: Celery command failed```

 

When these tasks fail, they send a "task failed" email that has very little 
information about the state of the task failure.  The logs for the task run are 
empty, because the task never actually did anything and the error message was 
generated by the worker.  Also, the task does not retry, so if something goes 
wrong with Celery, the task simply fails outright instead of trying again.

 

This may be the same issue reported in #1844, but I am not sure because there 
is not much detail there.


> Tasks that fail with spurious Celery issues are not retried
> ---
>
> Key: AIRFLOW-2827
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2827
> Project: Apache Airflow
>  Issue Type: Bug
>Reporter: James Davidheiser
>Priority: Major
>
> We have a DAG with ~500 tasks, running on Airflow set up in Kubernetes with 
> RabbitMQ using a setup derived pretty heavily from 
> [https://github.com/mumoshu/kube-airflow.]  Occasionally, we will hit some 
> spurious Celery execution failures (possibly related to 
> https://issues.apache.org/jira/browse/AIRFLOW-2011 ), resulting in the Worker 
> throwing errors that look like this:
>  
>  
> {code:java}
> [2018-07-30 11:04:26,812: ERROR/ForkPoolWorker-9] Task 
> airflow.executors.celery_executor.execute_command[462de800-ad3f-4151-90bf-9155cc6c66f6]
>  raised unexpected: AirflowException('Celery command failed',)
>  Traceback (most recent call last):
>    File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 
> 382, in trace_task
>      R = retval = fun(*args, **kwargs)
>    File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 
> 641, in __protected_call__
>      return self.run(*args, **kwargs)
>    File 
> "/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py",
>  line 55, in execute_command
>      raise AirflowException('Celery command failed')
>  

[jira] [Created] (AIRFLOW-2827) Tasks that fail with spurious Celery issues are not retried

2018-07-30 Thread James Davidheiser (JIRA)
James Davidheiser created AIRFLOW-2827:
--

 Summary: Tasks that fail with spurious Celery issues are not 
retried
 Key: AIRFLOW-2827
 URL: https://issues.apache.org/jira/browse/AIRFLOW-2827
 Project: Apache Airflow
  Issue Type: Wish
Reporter: James Davidheiser


We have a DAG with ~500 tasks, running on Airflow set up in Kubernetes with 
RabbitMQ using a setup derived pretty heavily from 
[https://github.com/mumoshu/kube-airflow.]  Occasionally, we will hit some 
spurious Celery execution failures (possibly related to #2011 ), resulting in 
the Worker throwing errors that look like this:

 

```[2018-07-30 11:04:26,812: ERROR/ForkPoolWorker-9] Task 
airflow.executors.celery_executor.execute_command[462de800-ad3f-4151-90bf-9155cc6c66f6]
 raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 382, 
in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 641, 
in __protected_call__
    return self.run(*args, **kwargs)
  File 
"/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py", 
line 55, in execute_command
    raise AirflowException('Celery command failed')
AirflowException: Celery command failed```

 

When these tasks fail, they send a "task failed" email that has very little 
information about the state of the task failure.  The logs for the task run are 
empty, because the task never actually did anything and the error message was 
generated by the worker.  Also, the task does not retry, so if something goes 
wrong with Celery, the task simply fails outright instead of trying again.

 

This may be the same issue reported in #1844, but I am not sure because there 
is not much detail there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-2011) Airflow amqp pool maintains dead connections

2018-07-30 Thread James Davidheiser (JIRA)


[ 
https://issues.apache.org/jira/browse/AIRFLOW-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16562316#comment-16562316
 ] 

James Davidheiser commented on AIRFLOW-2011:


Confirming that I am also running into this error - can this configuration 
change be made in the [celery] section of airflow.cfg?
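
For reference, my understanding is that the workaround could be wired in 
roughly like this, assuming a version of Airflow that exposes the 
celery_config_options setting under [celery] in airflow.cfg (the module name 
below is hypothetical):

{code:python}
# my_celery_config.py -- hypothetical module on the scheduler/worker PYTHONPATH,
# sketching the workaround without patching default_celery.py in place.
from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG

CELERY_CONFIG = dict(DEFAULT_CELERY_CONFIG)
# Disable the broker connection pool so a fresh AMQP connection is opened each
# time one is needed, instead of reusing a socket RabbitMQ may have dropped.
CELERY_CONFIG["broker_pool_limit"] = None
{code}

airflow.cfg would then point at it with celery_config_options = 
my_celery_config.CELERY_CONFIG in the [celery] section.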

> Airflow amqp pool maintains dead connections
> 
>
> Key: AIRFLOW-2011
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2011
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: celery, scheduler
>Affects Versions: 1.9.1
> Environment: OS: Ubuntu 16.04 LTS (debian)
> Python: 3.6.3
> Airflow: 1.9.1rc1
>Reporter: Kevin Reilly
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> Airflow scheduler deadlocks on queue-up for tasks
> [2018-01-08 07:01:09,315] {celery_executor.py:101} ERROR - Error syncing 
> the celery executor, ignoring it:
> [2018-01-08 07:01:09,315] {celery_executor.py:102} ERROR - [Errno 104] 
> Connection reset by peer
> Traceback (most recent call last):
> File 
> "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py",
>  line 83, in
> state = async.state
> File "/usr/local/lib/python3.6/dist-packages/celery/result.py", line 436, in 
> state
> return self._get_task_meta()['status']
> File "/usr/local/lib/python3.6/dist-packages/celery/result.py", line 375, in 
> _get_task_meta
> return self._maybe_set_cache(self.backend.get_task_meta(self.id))
> File "/usr/local/lib/python3.6/dist-packages/celery/backends/rpc.py", line 
> 244, in get_task_meta
> for acc in self._slurp_from_queue(task_id, self.accept, backlog_limit):
> File "/usr/local/lib/python3.6/dist-packages/celery/backends/rpc.py", line 
> 278, in
> binding.declare()
> File "/usr/local/lib/python3.6/dist-packages/kombu/entity.py", line 605, in 
> declare
> self._create_queue(nowait=nowait, channel=channel)
> File "/usr/local/lib/python3.6/dist-packages/kombu/entity.py", line 614, in 
> _create_queue
> self.queue_declare(nowait=nowait, passive=False, channel=channel)
> File "/usr/local/lib/python3.6/dist-packages/kombu/entity.py", line 649, in 
> queue_declare
> nowait=nowait,
> File "/usr/local/lib/python3.6/dist-packages/amqp/channel.py", line 1147, in 
> queue_declare
> nowait, arguments),
> File "/usr/local/lib/python3.6/dist-packages/amqp/abstract_channel.py", line 
> 50, in send_method
> conn.frame_writer(1, self.channel_id, sig, args, content)
> File "/usr/local/lib/python3.6/dist-packages/amqp/method_framing.py", line 
> 166, in write_frame
> write(view[:offset])
> File "/usr/local/lib/python3.6/dist-packages/amqp/transport.py", line 258, in 
> write
> self._write(s)
> ConnectionResetError: [Errno 104] Connection reset by peer
> Editing the celery settings file (default_celery.py) and adding
> "broker_pool_limit": None,
> between lines 37 and 38 would solve the issue.  This setting makes celery 
> create a new amqp connection each time it needs one, which prevents the 
> rabbitmq server from dropping a connection without the client noticing and 
> leaving broken sockets open for use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-2363) S3 remote logging appending tuple instead of str

2018-04-26 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16454772#comment-16454772
 ] 

James Davidheiser commented on AIRFLOW-2363:


I tested again after updating to work around some other bugs and to match the 
latest suggested celery configuration options, and confirmed that the change in 
[https://github.com/apache/incubator-airflow/pull/3259] works for me.

> S3 remote logging appending tuple instead of str
> 
>
> Key: AIRFLOW-2363
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2363
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: logging
>Reporter: Kyle Hamlin
>Priority: Major
> Fix For: 1.10.0
>
>
> A recent merge into master that added support for Elasticsearch logging seems 
> to have broken S3 logging by returning a tuple instead of a string.
> [https://github.com/apache/incubator-airflow/commit/ec38ba9594395de04ec932481212a86fbe9ae107#diff-0442332ecbe42ebbf426911c68d8cd4aR128]
>  
> following errors thrown:
>  
> *Session NoneType error*
>  Traceback (most recent call last):
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/s3_task_handler.py",
>  line 171, in s3_write
>      encrypt=configuration.conf.getboolean('core', 'ENCRYPT_S3_LOGS'),
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 274, in load_string
>      encrypt=encrypt)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 313, in load_bytes
>      client = self.get_conn()
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 34, in get_conn
>      return self.get_client_type('s3')
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 151, in get_client_type
>      session, endpoint_url = self._get_credentials(region_name)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 97, in _get_credentials
>      connection_object = self.get_connection(self.aws_conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 82, in get_connection
>      conn = random.choice(cls.get_connections(conn_id))
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 77, in get_connections
>      conns = cls._get_connections_from_db(conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 72, in wrapper
>      with create_session() as session:
>    File "/usr/local/lib/python3.6/contextlib.py", line 81, in __enter__
>      return next(self.gen)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 41, in create_session
>      session = settings.Session()
>  TypeError: 'NoneType' object is not callable
>  
> *TypeError must be str not tuple*
>  [2018-04-16 18:37:28,200] ERROR in app: Exception on 
> /admin/airflow/get_logs_with_metadata [GET]
>  Traceback (most recent call last):
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1982, in 
> wsgi_app
>      response = self.full_dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1614, in 
> full_dispatch_request
>      rv = self.handle_user_exception(e)
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1517, in 
> handle_user_exception
>      reraise(exc_type, exc_value, tb)
>    File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 33, 
> in reraise
>      raise value
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1612, in 
> full_dispatch_request
>      rv = self.dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1598, in 
> dispatch_request
>      return self.view_functions[rule.endpoint](**req.view_args)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 69, in inner
>      return self._run_view(f, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 368, in _run_view
>      return fn(self, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_login.py", line 755, in 
> decorated_view
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/utils.py", line 
> 269, in wrapper
>      return f(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 74, in wrapper
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/views.py", line 
> 770, in get_logs_with_metadata
>      logs, metadatas = handler.read(ti, try_number, metadata=metadata)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/file_task_handler.py",
>  

[jira] [Assigned] (AIRFLOW-2363) S3 remote logging appending tuple instead of str

2018-04-26 Thread James Davidheiser (JIRA)

 [ 
https://issues.apache.org/jira/browse/AIRFLOW-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Davidheiser reassigned AIRFLOW-2363:
--

Assignee: (was: James Davidheiser)

> S3 remote logging appending tuple instead of str
> 
>
> Key: AIRFLOW-2363
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2363
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: logging
>Reporter: Kyle Hamlin
>Priority: Major
> Fix For: 1.10.0
>
>
> A recent merge into master that added support for Elasticsearch logging seems 
> to have broken S3 logging by returning a tuple instead of a string.
> [https://github.com/apache/incubator-airflow/commit/ec38ba9594395de04ec932481212a86fbe9ae107#diff-0442332ecbe42ebbf426911c68d8cd4aR128]
>  
> following errors thrown:
>  
> *Session NoneType error*
>  Traceback (most recent call last):
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/s3_task_handler.py",
>  line 171, in s3_write
>      encrypt=configuration.conf.getboolean('core', 'ENCRYPT_S3_LOGS'),
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 274, in load_string
>      encrypt=encrypt)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 313, in load_bytes
>      client = self.get_conn()
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 34, in get_conn
>      return self.get_client_type('s3')
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 151, in get_client_type
>      session, endpoint_url = self._get_credentials(region_name)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 97, in _get_credentials
>      connection_object = self.get_connection(self.aws_conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 82, in get_connection
>      conn = random.choice(cls.get_connections(conn_id))
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 77, in get_connections
>      conns = cls._get_connections_from_db(conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 72, in wrapper
>      with create_session() as session:
>    File "/usr/local/lib/python3.6/contextlib.py", line 81, in __enter__
>      return next(self.gen)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 41, in create_session
>      session = settings.Session()
>  TypeError: 'NoneType' object is not callable
>  
> *TypeError must be str not tuple*
>  [2018-04-16 18:37:28,200] ERROR in app: Exception on 
> /admin/airflow/get_logs_with_metadata [GET]
>  Traceback (most recent call last):
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1982, in 
> wsgi_app
>      response = self.full_dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1614, in 
> full_dispatch_request
>      rv = self.handle_user_exception(e)
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1517, in 
> handle_user_exception
>      reraise(exc_type, exc_value, tb)
>    File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 33, 
> in reraise
>      raise value
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1612, in 
> full_dispatch_request
>      rv = self.dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1598, in 
> dispatch_request
>      return self.view_functions[rule.endpoint](**req.view_args)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 69, in inner
>      return self._run_view(f, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 368, in _run_view
>      return fn(self, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_login.py", line 755, in 
> decorated_view
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/utils.py", line 
> 269, in wrapper
>      return f(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 74, in wrapper
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/views.py", line 
> 770, in get_logs_with_metadata
>      logs, metadatas = handler.read(ti, try_number, metadata=metadata)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/file_task_handler.py",
>  line 165, in read
>      logs[i] += log
>  TypeError: must be str, not tuple



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (AIRFLOW-2363) S3 remote logging appending tuple instead of str

2018-04-26 Thread James Davidheiser (JIRA)

 [ 
https://issues.apache.org/jira/browse/AIRFLOW-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Davidheiser reassigned AIRFLOW-2363:
--

Assignee: James Davidheiser  (was: Kevin Yang)

> S3 remote logging appending tuple instead of str
> 
>
> Key: AIRFLOW-2363
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2363
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: logging
>Reporter: Kyle Hamlin
>Assignee: James Davidheiser
>Priority: Major
> Fix For: 1.10.0
>
>
> A recent merge into master that added support for Elasticsearch logging seems 
> to have broken S3 logging by returning a tuple instead of a string.
> [https://github.com/apache/incubator-airflow/commit/ec38ba9594395de04ec932481212a86fbe9ae107#diff-0442332ecbe42ebbf426911c68d8cd4aR128]
>  
> following errors thrown:
>  
> *Session NoneType error*
>  Traceback (most recent call last):
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/s3_task_handler.py",
>  line 171, in s3_write
>      encrypt=configuration.conf.getboolean('core', 'ENCRYPT_S3_LOGS'),
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 274, in load_string
>      encrypt=encrypt)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 313, in load_bytes
>      client = self.get_conn()
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 34, in get_conn
>      return self.get_client_type('s3')
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 151, in get_client_type
>      session, endpoint_url = self._get_credentials(region_name)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 97, in _get_credentials
>      connection_object = self.get_connection(self.aws_conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 82, in get_connection
>      conn = random.choice(cls.get_connections(conn_id))
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 77, in get_connections
>      conns = cls._get_connections_from_db(conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 72, in wrapper
>      with create_session() as session:
>    File "/usr/local/lib/python3.6/contextlib.py", line 81, in __enter__
>      return next(self.gen)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 41, in create_session
>      session = settings.Session()
>  TypeError: 'NoneType' object is not callable
>  
> *TypeError must be str not tuple*
>  [2018-04-16 18:37:28,200] ERROR in app: Exception on 
> /admin/airflow/get_logs_with_metadata [GET]
>  Traceback (most recent call last):
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1982, in 
> wsgi_app
>      response = self.full_dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1614, in 
> full_dispatch_request
>      rv = self.handle_user_exception(e)
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1517, in 
> handle_user_exception
>      reraise(exc_type, exc_value, tb)
>    File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 33, 
> in reraise
>      raise value
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1612, in 
> full_dispatch_request
>      rv = self.dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1598, in 
> dispatch_request
>      return self.view_functions[rule.endpoint](**req.view_args)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 69, in inner
>      return self._run_view(f, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 368, in _run_view
>      return fn(self, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_login.py", line 755, in 
> decorated_view
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/utils.py", line 
> 269, in wrapper
>      return f(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 74, in wrapper
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/views.py", line 
> 770, in get_logs_with_metadata
>      logs, metadatas = handler.read(ti, try_number, metadata=metadata)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/file_task_handler.py",
>  line 165, in read
>      logs[i] += log
>  TypeError: must be str, not tuple



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-2363) S3 remote logging appending tuple instead of str

2018-04-25 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16453104#comment-16453104
 ] 

James Davidheiser commented on AIRFLOW-2363:


FWIW I hit this bug too, and tried installing the version from the pull request 
(hash 0f526bb6c244a974cae5d68d088706ed90d6b916), but it still failed with the 
NoneType error.  I went back to a commit before the breaking change 
(5cb530b455be54e6b58eae19c8c10ef8f5cf955d) and it worked again.

> S3 remote logging appending tuple instead of str
> 
>
> Key: AIRFLOW-2363
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2363
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: logging
>Reporter: Kyle Hamlin
>Assignee: Kevin Yang
>Priority: Major
> Fix For: 1.10.0
>
>
> A recent merge into master that added support for Elasticsearch logging seems 
> to have broken S3 logging by returning a tuple instead of a string.
> [https://github.com/apache/incubator-airflow/commit/ec38ba9594395de04ec932481212a86fbe9ae107#diff-0442332ecbe42ebbf426911c68d8cd4aR128]
>  
> following errors thrown:
>  
> *Session NoneType error*
>  Traceback (most recent call last):
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/utils/log/s3_task_handler.py",
>  line 171, in s3_write
>      encrypt=configuration.conf.getboolean('core', 'ENCRYPT_S3_LOGS'),
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 274, in load_string
>      encrypt=encrypt)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 313, in load_bytes
>      client = self.get_conn()
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/S3_hook.py", 
> line 34, in get_conn
>      return self.get_client_type('s3')
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 151, in get_client_type
>      session, endpoint_url = self._get_credentials(region_name)
>    File 
> "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/aws_hook.py", 
> line 97, in _get_credentials
>      connection_object = self.get_connection(self.aws_conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 82, in get_connection
>      conn = random.choice(cls.get_connections(conn_id))
>    File "/usr/local/lib/python3.6/site-packages/airflow/hooks/base_hook.py", 
> line 77, in get_connections
>      conns = cls._get_connections_from_db(conn_id)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 72, in wrapper
>      with create_session() as session:
>    File "/usr/local/lib/python3.6/contextlib.py", line 81, in __enter__
>      return next(self.gen)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 41, in create_session
>      session = settings.Session()
>  TypeError: 'NoneType' object is not callable
>  
> *TypeError must be str not tuple*
>  [2018-04-16 18:37:28,200] ERROR in app: Exception on 
> /admin/airflow/get_logs_with_metadata [GET]
>  Traceback (most recent call last):
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1982, in 
> wsgi_app
>      response = self.full_dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1614, in 
> full_dispatch_request
>      rv = self.handle_user_exception(e)
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1517, in 
> handle_user_exception
>      reraise(exc_type, exc_value, tb)
>    File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 33, 
> in reraise
>      raise value
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1612, in 
> full_dispatch_request
>      rv = self.dispatch_request()
>    File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1598, in 
> dispatch_request
>      return self.view_functions[rule.endpoint](**req.view_args)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 69, in inner
>      return self._run_view(f, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_admin/base.py", line 
> 368, in _run_view
>      return fn(self, *args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/flask_login.py", line 755, in 
> decorated_view
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/utils.py", line 
> 269, in wrapper
>      return f(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 
> 74, in wrapper
>      return func(*args, **kwargs)
>    File "/usr/local/lib/python3.6/site-packages/airflow/www/views.py", line 
> 770, in get_logs_with_metadata
>      logs, metadatas = handler.read(ti, try_number, metadata=metadata)
>    File 
> 

[jira] [Created] (AIRFLOW-2317) Support for multiple resource pools

2018-04-12 Thread James Davidheiser (JIRA)
James Davidheiser created AIRFLOW-2317:
--

 Summary: Support for multiple resource pools
 Key: AIRFLOW-2317
 URL: https://issues.apache.org/jira/browse/AIRFLOW-2317
 Project: Apache Airflow
  Issue Type: Wish
  Components: pools
Reporter: James Davidheiser


We are migrating to Airflow from Luigi, where we have the capability to require 
multiple pools, and multiple "units" of a pool, for a given task.  This is very 
useful for a variety of use cases, but the two core examples are:
 * If a task accessing a data store is extremely resource-intensive, we might 
define a pool size of 10 resources, but use 5 of those in a single task to 
denote the extra load.  When smaller tasks are using the same data store, it's 
completely fine to allow more of them to run at once.
 * If a task is connecting to two different data stores, to transfer data 
between them, we might want to require a resource for each data store, so we 
can limit concurrency on both simultaneously (see the sketch below).
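
For concreteness, here is roughly what this looks like on the Luigi side (the 
class and resource names are made up for illustration): each task declares a 
resources dict, and the central scheduler caps concurrent usage of each named 
resource.

{code:python}
# Illustrative Luigi tasks only -- names are made up; the resource limits
# themselves (e.g. warehouse=10) live in the Luigi scheduler's [resources]
# configuration section.
import luigi


class HeavyWarehouseQuery(luigi.Task):
    # Takes 5 of the warehouse pool's 10 units, so at most two of these run
    # at once while lighter tasks can still share the remaining capacity.
    resources = {"warehouse": 5}

    def run(self):
        pass  # expensive query against the warehouse


class WarehouseToPostgresTransfer(luigi.Task):
    # Holds one unit in each pool, limiting concurrency on both data stores.
    resources = {"warehouse": 1, "postgres": 1}

    def run(self):
        pass  # copy rows from the warehouse into Postgres
{code}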

 

 

I know there are a lot of other issues related to capacity scheduling, which 
are tracked more broadly in https://issues.apache.org/jira/browse/AIRFLOW-72, 
and using more than one pool slot was suggested in 
https://issues.apache.org/jira/browse/AIRFLOW-1467, so this seems like it could 
be a useful thing to consider as part of larger efforts around managing 
capacity.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-1235) Odd behaviour when all gunicorn workers die

2018-03-09 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393418#comment-16393418
 ] 

James Davidheiser commented on AIRFLOW-1235:


I saw the same thing with a mysql connection error.  I am running Airflow in 
Kubernetes, and plan to set up a liveness probe to make sure it kills the 
container if the server stops working.

 

Some tasks failed on connecting to the database too, but those were marked as 
failed as I would expect.

> Odd behaviour when all gunicorn workers die
> ---
>
> Key: AIRFLOW-1235
> URL: https://issues.apache.org/jira/browse/AIRFLOW-1235
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: webserver
>Affects Versions: 1.8.0
>Reporter: Erik Forsberg
>Assignee: Kengo Seki
>Priority: Major
>
> The webserver has sometimes stopped responding to port 443, and today I found 
> the issue - I had a misconfigured resolv.conf that made it unable to talk to 
> my postgresql. This was the root cause, but the way airflow webserver behaved 
> was a bit odd.
> It seems that when all gunicorn workers failed to start, the gunicorn master 
> shut down. However, the main process (the one that starts gunicorn master) 
> did not shut down, so there was no way of detecting the failed status of 
> webserver from e.g. systemd or init script.
> Full traceback leading to stale webserver process:
> {noformat}
> May 21 09:51:57 airmaster01 airflow[26451]: [2017-05-21 09:51:57 +] 
> [23794] [ERROR] Exception in worker process:
> May 21 09:51:57 airmaster01 airflow[26451]: Traceback (most recent call last):
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 1122, in _do_get
> May 21 09:51:57 airmaster01 airflow[26451]: return self._pool.get(wait, 
> self._timeout)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/util/queue.py",
>  line 145, in get
> May 21 09:51:57 airmaster01 airflow[26451]: raise Empty
> May 21 09:51:57 airmaster01 airflow[26451]: sqlalchemy.util.queue.Empty
> May 21 09:51:57 airmaster01 airflow[26451]: During handling of the above 
> exception, another exception occurred:
> May 21 09:51:57 airmaster01 airflow[26451]: Traceback (most recent call last):
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/engine/base.py",
>  line 2147, in _wrap_pool_connect
> May 21 09:51:57 airmaster01 airflow[26451]: return fn()
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 387, in connect
> May 21 09:51:57 airmaster01 airflow[26451]: return 
> _ConnectionFairy._checkout(self)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 766, in _checkout
> May 21 09:51:57 airmaster01 airflow[26451]: fairy = 
> _ConnectionRecord.checkout(pool)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 516, in checkout
> May 21 09:51:57 airmaster01 airflow[26451]: rec = pool._do_get()
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 1138, in _do_get
> May 21 09:51:57 airmaster01 airflow[26451]: self._dec_overflow()
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/util/langhelpers.py",
>  line 66, in __exit__
> May 21 09:51:57 airmaster01 airflow[26451]: compat.reraise(exc_type, 
> exc_value, exc_tb)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/util/compat.py",
>  line 187, in reraise
> May 21 09:51:57 airmaster01 airflow[26451]: raise value
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 1135, in _do_get
> May 21 09:51:57 airmaster01 airflow[26451]: return self._create_connection()
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 333, in _create_connection
> May 21 09:51:57 airmaster01 airflow[26451]: return _ConnectionRecord(self)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> "/opt/airflow/production/lib/python3.4/site-packages/sqlalchemy/pool.py", 
> line 461, in __init__
> May 21 09:51:57 airmaster01 airflow[26451]: 
> self.__connect(first_connect_check=True)
> May 21 09:51:57 airmaster01 airflow[26451]: File 
> 

[jira] [Commented] (AIRFLOW-2143) Try number displays incorrect values in the web UI

2018-02-22 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373277#comment-16373277
 ] 

James Davidheiser commented on AIRFLOW-2143:


I confirmed that this affects error emails too - a task that fails on its 
first try starts the email out with `Try 2 out of 1`.

> Try number displays incorrect values in the web UI
> --
>
> Key: AIRFLOW-2143
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2143
> Project: Apache Airflow
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: James Davidheiser
>Priority: Minor
> Attachments: adhoc_query.png, task_instance_page.png
>
>
> This was confusing us a lot in our task runs - in the database, a task that 
> ran is marked as 1 try.  However, when we view it in the UI, it shows as 2 
> tries in several places.  These include:
>  * Task Instance Details (ie 
> [https://airflow/task?execution_date=xxx_id=xxx_id=xxx 
> )|https://airflow/task?execution_date=xxx_id=xxx_id=xxx]
>  * Task instance browser (/admin/taskinstance/)
>  * Task Tries graph (/admin/airflow/tries)
> Notably, it is correctly shown as 1 try in the log filenames, on the log 
> viewer page (admin/airflow/log?execution_date=), and some other places.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (AIRFLOW-2143) Try number displays incorrect values in the web UI

2018-02-22 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373271#comment-16373271
 ] 

James Davidheiser edited comment on AIRFLOW-2143 at 2/22/18 7:11 PM:
-

I am getting the correct log numbers, so 
https://issues.apache.org/jira/browse/AIRFLOW-1873 doesn't seem to be happening 
here.


was (Author: jdavidh):
I am getting the correct log numbers, so #1873 doesn't seem to be happening 
here.

> Try number displays incorrect values in the web UI
> --
>
> Key: AIRFLOW-2143
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2143
> Project: Apache Airflow
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: James Davidheiser
>Priority: Minor
> Attachments: adhoc_query.png, task_instance_page.png
>
>
> This was confusing us a lot in our task runs - in the database, a task that 
> ran is marked as 1 try.  However, when we view it in the UI, it shows as 2 
> tries in several places.  These include:
>  * Task Instance Details (ie 
> [https://airflow/task?execution_date=xxx_id=xxx_id=xxx 
> )|https://airflow/task?execution_date=xxx_id=xxx_id=xxx]
>  * Task instance browser (/admin/taskinstance/)
>  * Task Tries graph (/admin/airflow/tries)
> Notably, it is correctly shown as 1 try in the log filenames, on the log 
> viewer page (admin/airflow/log?execution_date=), and some other places.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-2143) Try number displays incorrect values in the web UI

2018-02-22 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373271#comment-16373271
 ] 

James Davidheiser commented on AIRFLOW-2143:


I am getting the correct log numbers, so #1873 doesn't seem to be happening 
here.

> Try number displays incorrect values in the web UI
> --
>
> Key: AIRFLOW-2143
> URL: https://issues.apache.org/jira/browse/AIRFLOW-2143
> Project: Apache Airflow
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: James Davidheiser
>Priority: Minor
> Attachments: adhoc_query.png, task_instance_page.png
>
>
> This was confusing us a lot in our task runs - in the database, a task that 
> ran is marked as 1 try.  However, when we view it in the UI, it shows as 2 
> tries in several places.  These include:
>  * Task Instance Details (ie 
> [https://airflow/task?execution_date=xxx_id=xxx_id=xxx 
> )|https://airflow/task?execution_date=xxx_id=xxx_id=xxx]
>  * Task instance browser (/admin/taskinstance/)
>  * Task Tries graph (/admin/airflow/tries)
> Notably, it is correctly shown as 1 try in the log filenames, on the log 
> viewer page (admin/airflow/log?execution_date=), and some other places.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AIRFLOW-2143) Try number displays incorrect values in the web UI

2018-02-22 Thread James Davidheiser (JIRA)
James Davidheiser created AIRFLOW-2143:
--

 Summary: Try number displays incorrect values in the web UI
 Key: AIRFLOW-2143
 URL: https://issues.apache.org/jira/browse/AIRFLOW-2143
 Project: Apache Airflow
  Issue Type: Bug
Affects Versions: 1.9.0
Reporter: James Davidheiser
 Attachments: adhoc_query.png, task_instance_page.png

This was confusing us a lot in our task runs - in the database, a task that ran 
is marked as 1 try.  However, when we view it in the UI, it shows as 2 tries in 
several places.  These include:
 * Task Instance Details (ie 
[https://airflow/task?execution_date=xxx_id=xxx_id=xxx 
)|https://airflow/task?execution_date=xxx_id=xxx_id=xxx]
 * Task instance browser (/admin/taskinstance/)
 * Task Tries graph (/admin/airflow/tries)

Notably, it is correctly shown as 1 try in the log filenames, on the log viewer 
page (admin/airflow/log?execution_date=), and some other places.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AIRFLOW-835) SMTP Mail delivery fails with server using CRAM-MD5 auth

2018-02-15 Thread James Davidheiser (JIRA)

[ 
https://issues.apache.org/jira/browse/AIRFLOW-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366192#comment-16366192
 ] 

James Davidheiser commented on AIRFLOW-835:
---

I ran into this in 1.9.0 as well.  I solved it by creating a copy of email.py, 
with the username and password cast as Python 2 strings, and referencing that 
in airflow.cfg's email_backend.
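
For anyone else hitting this, a rough sketch of the relevant part of that 
copied module is below.  The module name is hypothetical, the surrounding code 
is assumed to match Airflow 1.9's airflow/utils/email.py (TLS/SSL handling 
simplified), and the only substantive change is casting the credentials before 
login:

{code:python}
# my_email.py -- hypothetical copy of Airflow 1.9's airflow/utils/email.py,
# trimmed to the relevant function; everything else is left as in the original.
import smtplib

from airflow import configuration


def send_MIME_email(e_from, e_to, mime_msg, dryrun=False):
    if dryrun:
        return
    SMTP_HOST = configuration.get('smtp', 'SMTP_HOST')
    SMTP_PORT = configuration.getint('smtp', 'SMTP_PORT')
    # Cast to native Python 2 str: smtplib's CRAM-MD5 path feeds the password
    # to hmac.HMAC, which chokes on the future.types.newstr that
    # configuration.get() returns.
    SMTP_USER = str(configuration.get('smtp', 'SMTP_USER'))
    SMTP_PASSWORD = str(configuration.get('smtp', 'SMTP_PASSWORD'))

    s = smtplib.SMTP(SMTP_HOST, SMTP_PORT)
    s.starttls()  # TLS/SSL handling simplified compared to the stock module
    s.login(SMTP_USER, SMTP_PASSWORD)
    s.sendmail(e_from, e_to, mime_msg.as_string())
    s.quit()
{code}

The rest of the copy (send_email_smtp and friends) stays unchanged, and 
airflow.cfg's email_backend points at my_email.send_email_smtp instead of 
airflow.utils.email.send_email_smtp.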

> SMTP Mail delivery fails with server using CRAM-MD5 auth
> 
>
> Key: AIRFLOW-835
> URL: https://issues.apache.org/jira/browse/AIRFLOW-835
> Project: Apache Airflow
>  Issue Type: Bug
>  Components: utils
>Affects Versions: Airflow 1.7.1
> Environment: https://hub.docker.com/_/python/ (debian:jessie + 
> python2.7 in docker)
>Reporter: Joseph Harris
>Priority: Minor
>
> Traceback when sending email from an SMTP server configured to offer CRAM-MD5 
> (in all cases, TLS included). This occurs because the configuration module 
> returns the password as a future.types.newstr instead of a plain str (see 
> below for the gory details of why this breaks).
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/site-packages/airflow/models.py", line 1308, 
> in handle_failure
> self.email_alert(error, is_retry=False)
>   File "/usr/local/lib/python2.7/site-packages/airflow/models.py", line 1425, 
> in email_alert
> send_email(task.email, title, body)
>   File "/usr/local/lib/python2.7/site-packages/airflow/utils/email.py", line 
> 43, in send_email
> return backend(to, subject, html_content, files=files, dryrun=dryrun)
>   File "/usr/local/lib/python2.7/site-packages/airflow/utils/email.py", line 
> 79, in send_email_smtp
> send_MIME_email(SMTP_MAIL_FROM, to, msg, dryrun)
>   File "/usr/local/lib/python2.7/site-packages/airflow/utils/email.py", line 
> 95, in send_MIME_email
> s.login(SMTP_USER, SMTP_PASSWORD)
>   File "/usr/local/lib/python2.7/smtplib.py", line 607, in login
> (code, resp) = self.docmd(encode_cram_md5(resp, user, password))
>   File "/usr/local/lib/python2.7/smtplib.py", line 571, in encode_cram_md5
> response = user + " " + hmac.HMAC(password, challenge).hexdigest()
>   File "/usr/local/lib/python2.7/hmac.py", line 75, in __init__
> self.outer.update(key.translate(trans_5C))
>   File "/usr/local/lib/python2.7/site-packages/future/types/newstr.py", line 
> 390, in translate
> if ord(c) in table:
> TypeError: 'in ' requires string as left operand, not int
> SMTP configs:
> [email]
> email_backend = airflow.utils.email.send_email_smtp
> [smtp]
> smtp_host = {a_smtp_server}
> smtp_port = 587
> smtp_starttls = True
> smtp_ssl = False
> smtp_user = {a_username}
> smtp_password = {a_password}
> smtp_mail_from = {a_email_addr}
> *Gory details
> If the server offers CRAM-MD5, smtplib prefers this by default, and will try 
> to use hmac.HMAC to hash the password:
> https://hg.python.org/cpython/file/2.7/Lib/smtplib.py#l602
> https://hg.python.org/cpython/file/2.7/Lib/smtplib.py#l571
> But if the password is a newstr, newstr.translate expects a dict mapping 
> instead of str, and raises an exception.
> https://hg.python.org/cpython/file/2.7/Lib/hmac.py#l75
> All of this occurs after a successful SMTP.ehlo(), so it's probably not crap 
> container networking
> Could be resolved by passing the smtp password as a future.types.newbytes, 
> as this behaves as expected:
> from future.types import newstr, newbytes
> import hmac
> # Make str / newstr types
> test = 'a_string'
> test_newstr = newstr(test)
> test_newbytes = newbytes(test)
> msg = 'future problems'
> # Test 1 - Try to do a HMAC:
> # fine
> hmac.HMAC(test, msg)
> # fails horribly
> hmac.HMAC(test_newstr, msg)
> # is completely fine
> hmac.HMAC(test_newbytes, msg)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)