Re: Please Test Staging Phabricator

2016-12-07 Thread Tim Flink
On Wed, 07 Dec 2016 08:24:46 -0800
Adam Williamson  wrote:



> For a start Ipsilon tells me it's some entirely foreign third-party
> domain - 'monikra.me' - that wants access to all my personal
> information, which is a bit unsettling. I went ahead and let it have
> it (For Science!) and got:

FWIW, monikra.me is a service that puiterwijk made for federated login.

> Unhandled Exception ("HTTPFutureHTTPResponseStatus")
> [HTTP/400]
> Error 400 (Bad Request)!!1

Re: Please Test Staging Phabricator

2016-12-07 Thread Adam Williamson
On Wed, 2016-12-07 at 08:52 -0700, Tim Flink wrote:
> As support for the Persona system has wound down, we finally have a
> new method for logging into our phabricator instance (that should
> also get rid of all those 500s on login).
> 
> My goal has been to set up the migration so that there's no account
> fiddling needed to use the new auth system. Things are working in my
> testing but I'd like to see more people test out the new auth method
> before deploying all of this to production.
> 
> If you have the time, please try logging in to
> 
> https://phab.qa.stg.fedoraproject.org/
> 
> I've seen some errors from Ipsilon about "Transaction expired, or
> cookies not available"; if you hit that, click on "Try to login again"
> and everything should work.
> 
> If you run into problems, please let me know. There are a few accounts
> which will need tweaking by hand (phabricator username doesn't match
> FAS username so my script didn't work) but I wanted to make sure this
> was working for more than just me before finishing things up.
> 
> Tim

For a start Ipsilon tells me it's some entirely foreign third-party
domain - 'monikra.me' - that wants access to all my personal
information, which is a bit unsettling. I went ahead and let it have it
(For Science!) and got:

Unhandled Exception ("HTTPFutureHTTPResponseStatus")
[HTTP/400]
Error 400 (Bad Request)!!1

Please Test Staging Phabricator

2016-12-07 Thread Tim Flink
As support for the Persona system has wound down, we finally have a
new method for logging into our phabricator instance (that should
also get rid of all those 500s on login).

My goal has been to set up the migration so that there's no account
fiddling needed to use the new auth system. Things are working in my
testing but I'd like to see more people test out the new auth method
before deploying all of this to production.

If you have the time, please try logging in to

https://phab.qa.stg.fedoraproject.org/

I've seen some errors from Ipsilon about "Transaction expired, or
cookies not available"; if you hit that, click on "Try to login again"
and everything should work.

If you run into problems, please let me know. There are a few accounts
which will need tweaking by hand (phabricator username doesn't match
FAS username so my script didn't work) but I wanted to make sure this
was working for more than just me before finishing things up.
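
Not the actual migration script, but a rough sketch of the kind of check
involved, assuming two plain-text username exports (hypothetical file names,
one username per line); the real script and its data sources may differ:

#!/usr/bin/env python3
# Hypothetical sketch: flag Phabricator accounts whose username has no
# matching FAS username, so they can be fixed up by hand. The input files
# and their format are assumptions for illustration only.

def load_usernames(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

phab_users = load_usernames("phabricator-usernames.txt")  # assumed export
fas_users = load_usernames("fas-usernames.txt")           # assumed export

mismatched = sorted(phab_users - fas_users)
for name in mismatched:
    print("needs manual tweaking:", name)
print("%d of %d accounts need hand-editing" % (len(mismatched), len(phab_users)))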

Tim




Re: ResultsDB 2.0 - DB migration on DEV

2016-12-07 Thread Josef Skladanka
On Mon, Dec 5, 2016 at 4:25 PM, Tim Flink  wrote:

> Is there a way we could export the results as a json file or something
> similar? If there is (or if it could be added without too much
> trouble), we would have multiple options:
>

Sure, adding some kind of export should be doable.
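
A minimal sketch of what such an export could look like, assuming direct
access to the Postgres database via psycopg2 and a hypothetical 'result'
table; the real ResultsDB schema and any eventual export API may differ:

#!/usr/bin/env python3
# Hypothetical export sketch: dump rows from a Postgres table to a JSON
# file. Table and column names are placeholders, not the actual ResultsDB
# schema.
import json
import psycopg2

def export_results(dsn, outfile, since=None):
    query = "SELECT id, testcase_name, outcome, submit_time FROM result"
    params = ()
    if since is not None:
        query += " WHERE submit_time >= %s"  # incremental export of new results
        params = (since,)

    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur, open(outfile, "w") as out:
            cur.execute(query, params)
            records = []
            for row in cur:
                records.append({
                    "id": row[0],
                    "testcase": row[1],
                    "outcome": row[2],
                    "submit_time": row[3].isoformat(),
                })
            json.dump(records, out, indent=2)
    finally:
        conn.close()

if __name__ == "__main__":
    export_results("dbname=resultsdb user=resultsdb", "results-export.json")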


>
> 1. Dump the contents of the current db, do a partial offline migration
>    and finish it during the upgrade outage by export/importing the
>    newest data, deleting the production db and importing the offline
>    upgraded db. If that still takes too long, create a second postgres
>    db containing the offline upgrade, switch over during the outage and
>    import the new results since the db was copied.
>
>
I slept just two hours, so this is a bit tangled for me. So - my initial
idea was that we:
 - dump the database
 - delete most of the results
 - do migration on the small data set

In parallel (or later on), we would
 - create a second database (let's call it 'archive')
 - import the un-migrated dump
 - remove data that is in the production db
 - run the lengthy migration

This way, we have minimal downtime, and the data are available in the
'archive' db.
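
To make the sequence concrete, here is a rough sketch of those steps as a
Python wrapper around the standard Postgres tools. The database names,
cutoff date, table name, and the alembic-style migration command are all
placeholders, not the real deployment:

#!/usr/bin/env python3
# Rough sketch of the dump / trim / archive / migrate workflow described
# above. Everything here (db names, cutoff, table, migration command) is
# an assumption for illustration.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

CUTOFF = "2016-11-01"  # keep only results newer than this in production

# 1. Dump the full, un-migrated production database.
run(["pg_dump", "-Fc", "-f", "resultsdb-full.dump", "resultsdb"])

# 2. Trim production to a small, recent dataset and migrate it (fast).
run(["psql", "-d", "resultsdb",
     "-c", "DELETE FROM result WHERE submit_time < '%s'" % CUTOFF])
run(["alembic", "upgrade", "head"])  # or whatever resultsdb's migration entry point is

# 3. In parallel (or later): restore the full dump into an 'archive' db,
#    drop the rows that stayed in production, and run the slow migration there.
run(["createdb", "resultsdb_archive"])
run(["pg_restore", "-d", "resultsdb_archive", "resultsdb-full.dump"])
run(["psql", "-d", "resultsdb_archive",
     "-c", "DELETE FROM result WHERE submit_time >= '%s'" % CUTOFF])
# then run the same migration against the archive db, pointed at its URL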

With the archive db, we could either
1) dump the data and then import it into the prod db (again, no downtime)
2) just spawn another resultsdb (archives.resultsdb?) instance that would
operate on top of the archives

I'd rather do the second, since it also has the benefit of being able to
offload old data to the 'archive' database (which would/could be 'slow by
definition'), while keeping the 'active' dataset small enough that it could
all be in memory for fast queries.
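
For that second option, spawning another instance mostly comes down to
pointing the same code at the other database. A sketch of the idea, assuming
a Flask-SQLAlchemy style setting; hosts and database names are placeholders:

# Hypothetical config sketch for two instances of the same resultsdb code
# base. The setting name assumes a Flask-SQLAlchemy style app; hosts and
# database names are made up.

# 'active' instance: small dataset, can live in memory, fast queries
SQLALCHEMY_DATABASE_URI = "postgresql://resultsdb:secret@db01/resultsdb"

# 'archive' instance (archives.resultsdb?): same code, only the URI changes,
# backed by the big, slow archive database
# SQLALCHEMY_DATABASE_URI = "postgresql://resultsdb:secret@db01/resultsdb_archive"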

What do you think? I guess we wanted to do something pretty similar; I
just got a bit lost in what you wrote :)



> 2. If the import/export process is fast enough, we might be able to do
>    that instead of the in-place migration
>

My gut feeling is that it would be pretty slow, but I have no relevant
experience.
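
One cheap way to replace the gut feeling with a number: time a COPY-based
export of the biggest table on a staging copy and extrapolate. Again only a
sketch, with a placeholder DSN and table name:

#!/usr/bin/env python3
# Hypothetical timing harness: measure how long a CSV COPY of one table
# takes, to estimate whether a full export/import fits in an outage window.
import time
import psycopg2

def time_copy_out(dsn, table, outfile):
    start = time.perf_counter()
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur, open(outfile, "w") as out:
            cur.copy_expert("COPY %s TO STDOUT WITH CSV HEADER" % table, out)
    finally:
        conn.close()
    return time.perf_counter() - start

elapsed = time_copy_out("dbname=resultsdb_stg", "result", "result.csv")
print("export took %.1f seconds" % elapsed)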

Joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org