A picturesque authentication problem ONLY in Google Chrome
Hello all! I hope everyone is well. I'm running an application on Django 1.4.22, and for some days now my users have been having trouble accessing the restricted area of my site.

The site runs on the main domain www.dominio.com.br and the restricted area on app.dominio.com.br, both over https, and so far everything is fine. When a user tries to reach the restricted area through the main domain, he enters www.dominio.com.br and clicks a link that redirects to the restricted area on the subdomain app.dominio.com.br, but he cannot log in: he enters his username and password and is simply returned to the login page, unauthenticated.

After some tests, I verified that if I open Chrome's privacy settings and delete the cookie for the domain in question, the user can then log in successfully, as long as he goes directly to app.dominio.com.br without passing through the main site www.dominio.com.br first. If he visits the main site afterwards, he can no longer log in to the restricted area, and the cookie must be deleted again in Chrome's settings.

This problem does not happen in other browsers, including other Chromium-family browsers such as Brave, Vivaldi, and even Microsoft's own. Has anyone run into a similar problem? Awaiting help from a kind soul.

Cheers!

Rogério Carrasqueira

--
You received this message because you are subscribed to the Google Groups "Django users" group. To unsubscribe from this group and stop receiving emails from it, send an email to django-users+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/CACX1ULQ4kMb%2BfdrkfZg0KvBUyfha6zbNDU1u1Grg9cFP_gfR9g%40mail.gmail.com.
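[Editor's note] A symptom like the one above often comes from two session cookies with the same name living on overlapping domains: one set for the parent domain and one set host-only for the subdomain, with the browser sending the stale parent-domain cookie to the login view. A minimal Django settings sketch, assuming the two sites are meant to share a single session cookie (the leading dot makes the cookie valid on every subdomain):

```python
# settings.py sketch (assumption: www. and app. should share the session).
# With SESSION_COOKIE_DOMAIN unset, each host sets its own host-only cookie,
# and a stale parent-domain cookie can shadow the subdomain's session.
SESSION_COOKIE_DOMAIN = '.dominio.com.br'  # valid on www. and app.
SESSION_COOKIE_SECURE = True               # both sites run over https
CSRF_COOKIE_DOMAIN = '.dominio.com.br'
```

If the two sites should instead keep separate sessions, renaming one cookie via SESSION_COOKIE_NAME avoids the collision as well.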
Re: Starving queue with django-celery and rabbitmq
Could someone with a charitable soul help?

Rogério Carrasqueira

On Thu., Jun. 25, 2020 at 17:12, Rogerio Carrasqueira wrote:
> [full original message, with the Celery settings and supervisor
> configuration, quoted in the post "Starving queue with django-celery
> and rabbitmq" below]

To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/CACX1ULQrrKWFivEZT%3DUPame46tMFFKM-E7z6nhXrdhUgE%2B7hfg%40mail.gmail.com.
Starving queue with django-celery and rabbitmq
Hello guys! I think this is somewhat off-topic, since it is probably more of a RabbitMQ question than a Django one, but I believe someone here may have gone through a similar situation.

I have a Django application that runs with Celery 3.1 and RabbitMQ 3.8.2; it uses RabbitMQ to distribute messages among a series of workers that perform several tasks.

It happens that when a very large number of tasks enters a given queue, that queue seems to take all the workers for itself. It is as if those tasks gained absurd priority, and Celery decided that this queue should get all the attention in the world, with the workers working only for it.

In this scenario, absurd situations arise. I have a worker A that works on queue1, a worker B that works on queue2, and I have configured in the Celery routes that task X should be allocated to the queue for worker B to execute. When an absurd amount of messages enters queue1, workers A and B both start working only on queue1, and queue2 is set aside. Only when worker B gets an execution error does the task that was in queue1 end up allocated to queue2, and hell breaks loose in the system, leaving a mess of the queue organization and causing a series of bottlenecks.
So I ask you friends for some light. Here is how I am configuring my Celery settings:

BROKER_TRANSPORT_OPTIONS = {}

CELERY_IMPORTS = ("core.app1.tasks", "core.app2.tasks")
CELERYD_TASK_TIME_LIMIT = 7200

CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
#CELERYD_MAX_TASKS_PER_CHILD = 1

CELERY_TIMEZONE = 'America/Sao_Paulo'
CELERY_ENABLE_UTC = True

CELERYBEAT_SCHEDULE = {
    'task1': {
        'task': 'tasks.task1',
        'schedule': crontab(minute='*/30'),
    },
    'task2': {
        'task': 'tasks.task2',
        'schedule': crontab(minute='*/30'),
    },
}

CELERY_RESULT_BACKEND = "redis://server-1.klglqr.ng.0001.use2.cache.amazonaws.com:6379/1"
CELERYBEAT_SCHEDULER = 'redbeat.RedBeatScheduler'
CELERYBEAT_MAX_LOOP_INTERVAL = 5

CELERY_DEFAULT_QUEUE = 'production-celery'
CELERY_SEND_TASK_ERROR_EMAILS = False

CELERY_ROUTES = {
    'tasks.task_1': {'queue': 'queue1'},
    'tasks.task_2': {'queue': 'queue2'},
}

Supervisor settings:

[program:app_core_production_celeryd_worker_a]
command=/usr/bin/python manage.py celery worker -n worker_a%%h -l INFO -c 30 -Q fila1 -O fair --without-heartbeat --without-mingle --without-gossip --autoreload --settings=core.settings.production
directory=/home/user/production/web_app/app
user=user
numprocs=1
stdout_logfile=/home/user/production/logs/celeryd_worker_a.log
stderr_logfile=/home/user/production/logs/celeryd_worker_a.log
autostart=true
autorestart=true
startsecs=10
stdout_logfile_maxbytes=5MB

[program:app_core_production_celeryd_worker_b]
command=/usr/bin/python manage.py celery worker -n worker_b%%h -l INFO -c 30 -Q fila2 -O fair --without-heartbeat --without-mingle --without-gossip --autoreload --settings=core.settings.production
directory=/home/user/production/web_app/app
user=user
numprocs=1
stdout_logfile=/home/user/production/logs/celeryd_worker_b.log
stderr_logfile=/home/user/production/logs/celeryd_worker_b.log
autostart=true
autorestart=true
startsecs=10
stdout_logfile_maxbytes=5MB

Thanks so much for your help!
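[Editor's note] One detail worth double-checking in a setup like the one above: the queue names in the workers' -Q options must match the queue names targeted by CELERY_ROUTES exactly, or tasks pile up in queues no worker is subscribed to while both workers drain whatever queue they do share. In the configuration above, the routes target queue1/queue2 while the workers subscribe to fila1/fila2. A minimal sketch of pinned worker invocations, assuming the route names are the canonical ones:

```shell
# Hypothetical worker commands: each worker consumes ONLY its own queue,
# and the -Q name matches the queue name used in CELERY_ROUTES.
python manage.py celery worker -n worker_a@%h -l INFO -c 30 -Q queue1 -O fair
python manage.py celery worker -n worker_b@%h -l INFO -c 30 -Q queue2 -O fair
```

With each worker pinned to exactly one queue, a flood of messages in queue1 keeps worker A busy but cannot pull worker B away from queue2.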
Rogério Carrasqueira

To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/CACX1ULRNdAxxwC4LNZy5cCMsd7o%2BGhR9WjgzA9CZkG6Ut9SMJw%40mail.gmail.com.
Re: Celery with SQS
Hello Jason! Thanks for your reply. It is not so easy to move from 1.4.22 to 2.2; it's a big leap. I can upgrade Celery to 4.3, but I don't know if it will work well with Django 1.4.22. Do you have a clue about what this issue could be?

Thanks

Rogério Carrasqueira

On Sat., Oct. 26, 2019 at 16:40, Jason wrote:
>
> both those versions are severely out of date, so I would first suggest to
> update. current version of celery is 4.3, and django is 2.2. If you have
> reasons to move from rabbit to SQS, why not update everything else while
> you're at it?

To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/CACX1ULTi0omhsEprv-Eo6VVDO5f-S7VDThaVh7LxGcbH7zGqLQ%40mail.gmail.com.
Celery with SQS
Hello all! I'm using Celery 3.1 with Django 1.4.22 and I want to move from RabbitMQ to SQS. I've set up Celery, and all scheduled tasks are running well. But when I use the add_consumer method, the jobs are not consumed by the worker and I get no response from it. Here is what I'm doing:

>>> from celery import current_app as app
>>> app.control.add_consumer('production-email-campaign-105258', reply=True, destination=['celery@worker_email.w13.ip-10-0-1-82'])
[]
>>> app.control.add_consumer('production-email-campaign-105258', reply=True, destination=['celery@worker_email.ip-10-0-1-82'])
[]
>>> app.control.add_consumer('production-email-campaign-105258', reply=True, destination=['celery_worker_email-ip-10-0-1-82-celery-pidbox'])
[]
>>> app.control.add_consumer('production-email-campaign-105258', reply=True)
[]
>>> app.control.add_consumer('production-email-campaign-105258', reply=True, destination=['celery_worker_email-ip-10-0-1-82'])
[]

I did not get any response from Celery, and the jobs remain waiting for a consumer. I would like to know if there is any way to make Celery work well with SQS and dynamic queues, for example 'production-email-campaign-XXX' queues that are created on the fly.

Thanks

Rogério Carrasqueira

To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/CACX1ULS%3Deirx4msnCiW2mAcZa9shdCsVdYuZMgNTSEiyH3MQjQ%40mail.gmail.com.
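[Editor's note] For context on why the calls above return empty lists: Celery's remote-control commands, including add_consumer, are delivered over broadcast messaging, which the SQS transport does not support, so the workers never receive the command. A common workaround is to name the queue at call time and start the worker already subscribed to it. A minimal sketch, assuming a hypothetical task send_campaign:

```python
from celery import Celery

# Hypothetical app; an 'sqs://' broker URL picks up AWS credentials
# from the environment.
app = Celery('core', broker='sqs://')

@app.task
def send_campaign(campaign_id):
    ...

# Route the task to a dynamically named queue at call time,
# instead of asking a running worker to add a consumer:
send_campaign.apply_async(
    args=[105258],
    queue='production-email-campaign-105258',
)
```

The worker then has to be launched with that queue name in its -Q list, since without broadcast support it cannot be re-subscribed remotely after startup.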
[OFF] Job Opportunity at São Paulo - SP - Brazil
Hello everybody! We have a job opportunity to work with us at Direct Flow/Performance. To join us, please send your résumé to *va...@directflow.com.br or va...@directperformance.com.br*

If you are a hard-core coder and don't disguise it, and you are a member of many forums about frameworks and tech talks, we want you! We are looking for an experienced developer to work with the web, web standards, and API integrations with other systems. Our challenge is to develop simple but powerful and smart solutions, to be used in projects that solve problems with maximum efficiency.

Job profile:
- Degree (completed or in progress) in Computer Engineering, Computer Science, Applied Mathematics and Computing, or an equivalent field.
- Experience with web technologies using frameworks such as Django, on top of the Python language
- Up to date with the news about your favorite programming language and about Google platforms
- The ability to solve problems with analytic capacity is essential

Differentials:
- Skills with the Google APIs and ad-server platforms
- Skills with web analytics systems
- Skills in developing with the Django framework

Our benefits:
- Career plan
- Salary compatible with the market
- Health care
- Continuing education program

Rogério Carrasqueira
---
e-mail: rogerio.carrasque...@gmail.com
skype: rgcarrasqueira
MSN: rcarrasque...@hotmail.com
ICQ: 50525616
Tel.: (11) 7805-0074