Rob G had similar issues running the test suite, but with an extra failure as
well! Not sure what is going on, but I think some of the tests may need
looking at.

Ben

On Wed, 24 Apr 2019 at 05:13, Vincent Meijer <[email protected]>
wrote:

> Nice!
> Sounds like that is a regular assertion error and independent of the
> environment, or does the test succeed locally?
>
> On Tue, Apr 23, 2019 at 6:46 PM Ben O'Steen <[email protected]> wrote:
>
>> Good news and bad news, I'm afraid:
>>
>> Good news: the environment flag seems to have done the trick, and the
>> Elasticsearch container no longer quits after running its bootstrap checks.
>>
>> Bad news: there is a failing test:
>>
>> FAIL: Test bulk deleting of documents in Elasticsearch
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>   File "/web_root/arches/tests/search/search_tests.py", line 66, in test_bulk_delete
>>     self.assertEqual(se.count(index='test'), 10)
>> AssertionError: 11 != 10
>>
>>
>> https://cloud.docker.com/repository/registry-1.docker.io/benosteen/arches/builds/88c8a09d-9fa6-44f5-a86a-fc960ba2f327
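>>
>> If anyone wants to poke at what the extra document is, something like the
>> following should show it (just a rough sketch, assuming Elasticsearch is
>> reachable on localhost:9200 and that the index really is called 'test', as
>> in the assertion above):
>>
>> # Count the documents in the test index
>> curl -s 'http://localhost:9200/test/_count?pretty'
>>
>> # List them, to see which one the bulk delete missed
>> curl -s 'http://localhost:9200/test/_search?size=20&pretty'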
>>
>>
>>
>> On Tue, 23 Apr 2019 at 08:23, Ben O'Steen <[email protected]> wrote:
>>
>>> That is an odd error; it certainly works locally but not when Docker Hub
>>> runs the test setup.
>>>
>>> One thing it could be: if Docker Hub is using default EC2 instances, they
>>> will have vm.max_map_count set too low for Elasticsearch. The default is
>>> high enough that Elasticsearch will boot, but not high enough to stop it
>>> erroring out after a very short period. I've been using 'sudo sysctl -w
>>> vm.max_map_count=262144' on the host to avoid this, which is why it is not
>>> erroring for me during Elasticsearch's bootstrap checks.
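>>>
>>> For reference, roughly what I have been doing on the host (the sysctl.d
>>> filename is just my own choice; adjust to taste):
>>>
>>> # Check the current value on the Docker host
>>> sysctl vm.max_map_count
>>>
>>> # Raise it for the running system
>>> sudo sysctl -w vm.max_map_count=262144
>>>
>>> # Make it persist across reboots
>>> echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
>>> sudo sysctl --system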
>>>
>>> We should be able to disable the checks (turning them from hard errors
>>> into soft warnings in the logs) by passing the Elasticsearch container
>>> "discovery.type=single-node" as an environment variable. I'll try this on
>>> my Docker Hub account and see if that fixes it.
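>>>
>>> Something along these lines for the Elasticsearch container; this is only
>>> a sketch, and the image tag is a stand-in, so swap in whichever
>>> Elasticsearch version the Arches images actually pin:
>>>
>>> # Run with single-node discovery so the bootstrap checks warn instead of
>>> # failing the node
>>> docker run -d --name elasticsearch \
>>>   -p 9200:9200 \
>>>   -e "discovery.type=single-node" \
>>>   docker.elastic.co/elasticsearch/elasticsearch:6.7.1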
>>>
>>> Ben
>>>
>>> On Tue, 23 Apr 2019 at 03:12, Vincent Meijer <[email protected]>
>>> wrote:
>>>
>>>> It seems there are problems with our automated Docker builds, which is
>>>> presumably why the 4.4.1 tag does not exist.
>>>>
>>>> The latest build of the master branch failed: the Elasticsearch endpoint
>>>> cannot be reached during the unit tests:
>>>>
>>>> Failed to establish a new connection: [Errno -2] Name or service not
>>>> known)
>>>>
>>>> https://cloud.docker.com/u/archesproject/repository/registry-1.docker.io/archesproject/arches/builds/ca27a312-4d02-4e1b-b65c-2f34cb22ef79
>>>>
>>>>
>>>> Not sure why this is, as the Elasticsearch container seems to be up and
>>>> running during the test.
>>>>
>>>> Full command: run_tests
>>>> Command: run_tests
>>>> Testing if database server is up...
>>>> Database server is up
>>>> Testing if Elasticsearch is up...
>>>> Elasticsearch is up
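>>>>
>>>> In case it helps narrow things down, a rough way to check whether the two
>>>> containers can actually see each other by name (the container names
>>>> 'arches' and 'elasticsearch' below are only guesses; substitute whatever
>>>> 'docker ps' reports during the build):
>>>>
>>>> # Which Docker networks is each container attached to?
>>>> docker inspect -f '{{json .NetworkSettings.Networks}}' arches elasticsearch
>>>>
>>>> # From inside the test container, does the Elasticsearch hostname resolve?
>>>> # (assuming getent is available in the image)
>>>> docker exec arches getent hosts elasticsearch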
>>>>
>>>> However, the 4.4.1 build does not seem to have been triggered at all, and
>>>> I'm at a loss as to why this hasn't happened...
>>>> Could someone with more permissions than me check the webhook settings in
>>>> the GitHub repo? :)
>>>>
>>>> Vincent
>>>>
>>>> On Fri, Apr 19, 2019 at 12:40 AM Ben O'Steen <[email protected]> wrote:
>>>>
>>>>>
>>>>> - As for the 4.4.1 image not being put onto Docker Hub, I'm not sure why
>>>>> that is. I've been making my own images of the codebase in lieu of that.
>>>>>
>>>>> - I don't get the System settings errors you mention, though I am using
>>>>> Arches based on a much more recent codebase (I created my Arches base
>>>>> image a week or so ago from the master branch).
>>>>>
>>>>> Ben
>>>>>
>>>>> On Tue, 16 Apr 2019 at 14:39, Matthias Bussonnier <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Thanks Ben,
>>>>>>
>>>>>> That was useful. I guess I got confused between the DEV-mode port 8000
>>>>>> and the PROD port 80; I've got it working now.
>>>>>>
>>>>>> Two follow-up questions, now that I can see the website and can log in:
>>>>>>   - I see on Docker Hub that 4.4.1 has not been published; is that
>>>>>> expected?
>>>>>>   - Once logged in, I can't seem to access any of the 3 "System
>>>>>> settings" pages; they give me a 500. Is that expected? (I haven't
>>>>>> managed to get both my deployment working and Django to spit out a
>>>>>> traceback, otherwise I would give more info; a rough way of digging
>>>>>> that out is sketched below.)
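>>>>>>
>>>>>> (Roughly what I mean by digging it out; the container name 'arches' here
>>>>>> is only a placeholder for whatever the web container is actually called
>>>>>> in this setup:)
>>>>>>
>>>>>> # Watch the web container's output while reproducing the 500; depending
>>>>>> # on the logging config, the traceback may show up here
>>>>>> docker logs -f arches
>>>>>>
>>>>>> Alternatively, temporarily setting DEBUG = True in the Django settings
>>>>>> should make the traceback appear in the browser instead of the bare 500
>>>>>> page.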
>>>>>>
>>>>>> I'm happy to start this as another thread if this is better suited.
>>>>>> --
>>>>>> Matthias
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 16 Apr 2019 at 10:12, Ben O'Steen <[email protected]> wrote:
>>>>>>
>>>>>>> I should also make it clear that all of what I mentioned above is the
>>>>>>> default behaviour and doesn't need you to do anything more to make it
>>>>>>> work.
>>>>>>>
>>>>
>>>

-- 
-- To post, send email to [email protected]. To unsubscribe, send 
email to [email protected]. For more information, 
visit https://groups.google.com/d/forum/archesproject?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Arches Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
