from the local
server to download the backup from the cloud server.
I know it is crude but it helps for now.
What is your take, Bob?
On 12/18/14, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi Gerald
We tested this when I was in Sierra Leone and we were finding serious
problems
Just a brief note to capture some points of discussion between Jim Grace
and myself last week lest they are forgotten forever.
Three relatively minor enhancements to our model which would allow dhis2 to
operate as a reasonable terminology service:
1. Extend the hard-wired single code attribute
For a bundle of reasons it would be much better for the lastUpdated field
to be updated as a default value on the database rather than explicitly
through the API. I would rather discourage its use through the API than
enforce it.
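The "default on the database" idea can be shown in miniature with sqlite; the table and column names here are illustrative, not the actual DHIS2 schema:

```python
import sqlite3

# Let the database stamp lastUpdated itself instead of relying on every
# API caller to set it. Illustrative schema only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataelement (
        uid TEXT PRIMARY KEY,
        name TEXT,
        lastUpdated TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
# The insert never mentions lastUpdated; the column default fills it in.
conn.execute("INSERT INTO dataelement (uid, name) VALUES ('abc123', 'ANC 1st visit')")
row = conn.execute("SELECT lastUpdated FROM dataelement WHERE uid = 'abc123'").fetchone()
```

The API can then simply stop writing the field; the database keeps it correct.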
On 22 December 2014 at 11:16, Morten Olav Hansen
-carl
On Dec 22, 2014 5:23 AM, Bob Jolliffe bobjolli...@gmail.com wrote:
Just a brief note to capture some points of discussion between Jim Grace
and myself last week lest they are forgotten forever.
Three relatively minor enhancements to our model which would allow dhis2
to operate
for a third field containing a UID for every code in the code table.
We also need to find a way to make this backwards compatible with existing
systems. ;)
Cheers,
Jim
On Mon, Dec 22, 2014 at 9:55 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
No. But the first point is to have the data model
I saw that thanks. Will use in the new year.
On 25 Dec 2014 10:51, Lars Helge Øverland larshe...@gmail.com wrote:
Hi Bob,
we now have support for the DE_GROUP-uid syntax in analytics in 2.17 and
trunk. Example:
are not prohibited from letting that class inherit properties
and logic from another base class later.
On Dec 7, 2013 11:43 PM, Bob Jolliffe bobjolli...@gmail.com wrote:
ah ok. Then I suppose we are sitting with an opportunity. As we come up
with a better solution to patient attributes we keep in mind
I guess Jason should be on this list?
On 2 September 2014 11:33, Lars Helge Øverland larshe...@gmail.com wrote:
Hi,
just a note, we are using trunk for the pepfar project for some time and
are having a series of presentations in the coming weeks. So please test
well before committing to
That sounds good. I think we need an overarching try/catch block to ensure
that whatever else we do right or wrong we somehow at least return XML, JSON
or what have you.
On 3 Oct 2014 14:32, Morten Olav Hansen morte...@gmail.com wrote:
On Thu, Oct 2, 2014 at 8:51 PM, Bob Jolliffe bobjolli
speed of import over security of the data, but it might need to be
something we should look into.
Drifting here Bob, but maybe I am no longer talking at cross-purposes? :)
Regards,
Jason
On Sat, Oct 4, 2014 at 12:49 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Jason maybe we talk
Public bug reported:
Maybe not quite a bug so much as a missing/overlooked feature, but there
is no widget to edit the code of a category. The code field exists in
the database.
** Affects: dhis2
Importance: Undecided
Status: New
--
You received this bug notification because you
larshe...@gmail.com
wrote:
What version? It is there from 2.17.
Lars
On Wed, Feb 4, 2015 at 5:51 PM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Public bug reported:
Maybe not quite a bug so much as a missing/overlooked feature, but there
is no widget to edit the code of a category
Jason is right. You should only need the hibernate sequence number if you
are dealing with the database through sql. And if you are, then the only
safe way to use it is to get the sequence during the execution of the
statement.
Getting the next sequence number and storing it to some variable
That's what I would have thought. Surely it is the SMS gateway which
translates the SMS message and then just pushes using the standard
datavalues API?
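If it helps, a sketch of what that gateway-side translation might look like; the SMS format ("CODE=value" pairs) and the data element codes are invented for illustration, not an actual DHIS2 SMS format:

```python
# Hypothetical gateway-side translation: an incoming SMS like
# "ANC1=10,ANC2=8" becomes a dataValueSet payload for the standard API.
def sms_to_datavalueset(text, org_unit, period):
    values = []
    for pair in text.split(","):
        code, value = pair.split("=")
        values.append({"dataElement": code.strip(), "value": value.strip()})
    return {"orgUnit": org_unit, "period": period, "dataValues": values}

payload = sms_to_datavalueset("ANC1=10,ANC2=8", "OU123", "201501")
```

The gateway then POSTs this payload to the datavalues endpoint like any other client.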
On 16 January 2015 at 11:23, Morten Olav Hansen morte...@gmail.com wrote:
Hi Olav
To my knowledge there is no such API. The only controller we
to this complex
environment. If they need to insert records directly into tables this
gives
them the opportunity to do the work they're used to (coming from
version
1.4)...
Regards,
Greg
On Fri, Jan 16, 2015 at 11:17 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Jason is right
('hibernate_sequence')),
etc
Probably there is a good reason. I guess you could always just modify your
table structure like the above and then you can happily forget all about
the existence of hibernate_sequence :-)
Bob
On 16 January 2015 at 11:56, Bob Jolliffe bobjolli...@gmail.com wrote:
Agree Calle. I too
I think Jason's suggestion makes good sense. It doesn't make intuitive
sense that people should interpret Number to mean real number as distinct
from integer. Though maybe real number is slightly too mathematical a term
for some users. I wonder is floating point less technical or more so?
Either
at 18:26, Morten Olav Hansen morte...@gmail.com wrote:
18k is not really that much.. is it giving you any kind of errors? given
the SL demo db... we can easily export many more objects than that..
--
Morten
On Thu, Jan 22, 2015 at 12:42 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Can you try and get a jstack dump while it is busy chomping cpu. That will
probably indicate where in the code it is spinning. There was a problem
earlier with a weakness in the paging code but should have been fixed by
that build.
On 21 January 2015 at 15:00, Jason Pickering
S %CPU %MEM    TIME+ COMMAND
28091 dhis 20 0 11.8g 2.1g 13m S 307 13.2 29:39.99 java
Regards,
Jason
On Wed, Jan 21, 2015 at 7:42 PM, Bob Jolliffe bobjolli...@gmail.com
wrote:
If it is running for 15 minutes then almost certainly it is stuck in some
kind of loop. Find the pid
Not really (I don't think). That commit was related to pageSize to prevent
the while loop immediately below entering into an infinite cycle.
The current behaviour of the pager in the example you give seems
semantically correct to me. The pager has returned a page. And only 1
amongst 1. There
...@thedutchies.com
mar...@ifi.uio.no
+47 970 36 752
On 16 Jan 2015, at 13:51, Bob Jolliffe bobjolli...@gmail.com wrote:
Not really (I don't think). That commit was related to pageSize to
prevent the while loop immediately below entering into an infinite cycle.
The current behaviour of the pager
%MEM    TIME+ COMMAND
28091 dhis 20 0 12.0g 3.6g 5916 S 796 23.0 328:48.52 java
Command used was
curl --verbose
"http:///dhis/api/dataElements.json?links=false&paging=false" -u
:
Regards,
Jason
On Wed, Jan 21, 2015 at 11:05 PM, Bob Jolliffe bobjolli...@gmail.com
Hi Knut.
I agree it probably does make sense to standardize this kind of object
metadata (storedby, updated, created) across all objects.
Do you get anything in audit log? That might be a more reliable way of
tracking changes as storedby can be changed over time.
Bob
On 28 January 2015 at
? And from there on make
sure to use getAccess( T object, User user ) ?
I'd bet a pint of best that this would reduce the time for that call to
list dataelements by 90%.
Bob
On 22 January 2015 at 10:58, Bob Jolliffe bobjolli...@gmail.com wrote:
Sorry Jason. A quick look through isn't shedding light
Hi Calle
I never thought of using categories like that, but it looks to me like it
would work. Category=DayOfTheMonth,
CategoryOptions=Day1,Day2,...,Day31. The sum of the categoryoptions adds
up to the total for the month which I think is consistent with rational
category design.
Obviously
SDMX 2.1 has the concept of a REPORTING_YEAR_START_DAY. See
http://sdmx.org/wp-content/uploads/2011/04/SDMX_2-1_SECTION_6_TechnicalNotes.pdf.
Thus you can have PERIOD=2010 REPORTING_YEAR_START_DAY=--07-01
which would indicate the year running from 01 July 2010 to 30 June
2011.
Not sure if this
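The start-day arithmetic can be sketched as follows, using the "--MM-DD" form from the SDMX technical notes:

```python
from datetime import date, timedelta

# For PERIOD=2010 with REPORTING_YEAR_START_DAY=--07-01, the reporting
# year runs from 2010-07-01 up to and including 2011-06-30.
def reporting_year(period_year, start_day):
    month, day = (int(p) for p in start_day.lstrip("-").split("-"))
    start = date(period_year, month, day)
    end = date(period_year + 1, month, day) - timedelta(days=1)
    return start, end

start, end = reporting_year(2010, "--07-01")
```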
Of course shp2gml tools already exist - though being command line they
could be described as a bit geeky. Still, there is need for some of
those geeky options for dealing with projection systems and the like.
I think someone with some python know how could write a neat little
QGis plugin to export
it already exists.
You are right that some shapefiles are really heavy (thus the need for
mapshaper).
On Tue, Mar 31, 2015 at 11:50 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
That is a handy enough library for dealing with projection all right.
But doesn't really help in reading
Lars is right. Any XML processor is going to choke on those
characters and there is nothing one could do in DHIS2 code to prevent
it. Basically it's just not valid XML.
On 31 March 2015 at 12:05, Lars Helge Øverland larshe...@gmail.com wrote:
Hi Calle,
this is not really a DHIS 2 bug. The
, but it does include the other crucial step of generalisation
using http://www.mapshaper.org/.
On Tue, Mar 31, 2015 at 9:53 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Of course shp2gml tools already exist - they could be described as a
bit geeky using the command line. Though there is need for some
Hi Gerald
I think only you have the answers to 2 and 4.
4 could of course be complicated or simple, depending on what exactly you
hope to achieve with the Data integration Project. One (of the many
possible) workflows I would suggest is nominating your HMIS system as the
authoritative source of
I guess you will also have to backport to 2.17.
On 23 January 2015 at 14:08, Morten Olav Hansen morte...@gmail.com wrote:
Yes, we have already fixed this a few places. I'm making a note of it, and
will fix it later.
--
Morten
On Fri, Jan 23, 2015 at 8:58 PM, Lars Helge Øverland
at 1:38 PM, Bob Jolliffe bobjolli...@gmail.com
wrote:
Hi
Here's a problem. Apologies, it's a long mail, but it's a serious business
and needs to be untangled.
Two or more systems have matching dataelements, categorycombos,
categories and categoryoptions. They could be matched on uid, name
...@gmail.com wrote:
https://www.dhis2.org/doc/snapshot/en/implementer/html/ch08s03.html#d5e623
will show you how to make resources available.
Use with caution.
Regards,
Jason
On Thu, Apr 23, 2015 at 11:03 AM, Bob Jolliffe bobjolli...@gmail.com
wrote:
You could also facade it at the reverse
There is another possibility of course that these URLs may have been
cached in user's browsers prior to the system being reinstalled. In
fact I suspect that is most probably the case.
On 22 April 2015 at 09:37, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi
I've been passing nginx logs
Hi Gerald
Looks like 2 different things are going on here. First as Alex says,
check that you don't already have something listening on 8080
(netstat -ntl is your friend). You are getting that error earlier in
the catalina log - port is already taken.
Then finally it seems to start but it
Hi Ese
I wonder did you end up with bits of the old and the new after
upgrade. Make sure you (i) completely clear out webapps directory
before deploying 2.18 war file and (ii) make sure you clear cache
completely in browser and also proxy cache if you are using proxy.
Bob
On 12 April 2015 at
this programmatically
so that we can direct logging to DHIS2_HOME? If so, then let's do it
this way instead.
On 8 June 2015 at 13:42, Bob Jolliffe bobjolli...@gmail.com wrote:
Maybe a better approach would be to upgrade log4j 1.2.7 to log4j 2 -
which allows for example to refer to environment variables
Lars this seems like a good development to me. I presume we will
still be able to override with an external log4j config. Sometimes
it's more convenient to rotate by time (e.g. month) than by size.
I presume all the exception dumps (ie not real logs) are going to
end up in catalina which is fine.
Yes that is what I just tested. It loads the log4j configuration file.
But this doesn't prevent the java code from running so it configures
twice.
On 9 June 2015 at 10:38, Lars Helge Øverland larshe...@gmail.com wrote:
On Tue, Jun 9, 2015 at 11:19 AM, Bob Jolliffe bobjolli...@gmail.com wrote
I just tried - you can't override those settings with an external
file. They will always kick in after the configuration is loaded.
Which is very inflexible. For example even the logging level is
fixed.
I think it's worth reconsidering.
On 9 June 2015 at 09:00, Bob Jolliffe bobjolli
Hi Lars
On 9 June 2015 at 07:37, Lars Helge Øverland larshe...@gmail.com wrote:
Hi Bob,
thanks for the feedback.
Yes the main reason for having a java based default config is to be able to
set the log file path to the dhis home directory. Yes the env var support in
log4j 2 is cool but it
Maybe a better approach would be to upgrade log4j 1.2.7 to log4j 2 -
which allows for example to refer to environment variables (like
DHIS2_HOME) in the configuration.
On 8 June 2015 at 13:37, Bob Jolliffe bobjolli...@gmail.com wrote:
Ah!! I have just noticed that this new logging regime
Ah!! I have just noticed that this new logging regime is actually
hardcoded in java. That strikes me as a really bad idea. Can we not
achieve exactly the same effect as you get in code by providing a
default log4j.xml file bundled in the war which could be overridden by
an external file.
Whereas
FWIW I discovered the neatest way to resolve these "Directory not
empty .. not deleting" conflicts seems to be to run bzr resolve like:
bzr resolve --action=take-other dhis-services/dhis-service-tracker
This (pretty non-intuitive) command issues a bzr rm on the directory
and then marks the
Hi Moemedi
Nobody has seriously looked at mydatamart now for over two years so it
is hard to tell. It was basically superseded by the introduction of
web pivot tables and the new web api and analytics.
Is there any reason why you are still stuck on 2.14?
Bob
On 24 June 2015 at 16:55, Moemedi
of a few per process it should be handled by the JVM.
But I might be wrong.
On Thu, Jun 18, 2015 at 8:46 PM, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi Lars
The problem is the dataValueSetService requires an InputStream to
feed off. There are only 2 ways to provide an InputStream
macro code here:
https://webapps.stackexchange.com/questions/79521/can-i-use-excel-to-get-json-from-the-dhis-api/79522#79522
See the comment at the end, seems it is possible to parse the JSON in a
macro also.
Thank you,
Markus
23. jun. 2015 kl. 12.13 skrev Bob Jolliffe bobjolli...@gmail.com
You can't use Excel to get JSON data from the dhis2 web api. At least not directly.
You could possibly write an Excel macro to get the data through the
web api (I am really not sure - it is far too many years since I wrote
an Excel macro), but you are still left with the problem of consuming
what
.
On 10 June 2015 at 15:25, Morten Olav Hansen morte...@gmail.com wrote:
What about DateUtils? We already have DateUtils.parseDate which we use a
lot, parseAdxDate?
--
Morten
On Wed, Jun 10, 2015 at 9:17 PM, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi
I am just looking at the task
confirmed that everything is working properly but my system is below
requirement for it to work properly.
On 6/16/15, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi Gerald
I think you neglected to mention you are using dhis2-tools to create
this instance. When you run 'dhis2-instance-create myinstance
I have just noticed that Morten has already implemented an
ImportSummaries class. Perfect. I can use that as a returned object.
Morten, what are you using this class for currently? Do we have other
instances where we are importing multiple datavaluesets?
On 12 June 2015 at 13:40, Bob Jolliffe
Hi Gerald
I think you neglected to mention you are using dhis2-tools to create
this instance. When you run 'dhis2-instance-create myinstance' a few
things happen:
1. a new user called myinstance is created with a home dir of
/var/lib/dhis2/myinstance
2. a new database role called myinstance is
Hi all
You will find here (http://ihe.net/Public_Comment/#qrph) the ADX
profile published for public comment.
This is an aggregate data standard which has been based substantially
on dxf2. There is a lot of detailed text (including - horror -
related to SDMX) which is not so interesting, but
As far as I recall you can't. The webapp is hardcoded to a url in the
container.
If you are planning to serve up dhis2 at a different url I can only
guess you are planning to share the web application with others. In
which case dhis live is probably not your best option anyway and you
are best
Agree with both Jason and Knut.
There are certain parameters within dhis2 live that are hard-coded and
thus restrict its ability to scale. It was designed initially to run
as single-user desktop application where it was important to hide the
messy details of configuring a web based application
Hi Calle
Getting stack overflow messages is almost always a symptom of infinite (or
too much) recursion. Increasing the stack size just delays the symptoms
presenting themselves (and makes the problem bigger). This is probably
also what you have found with your experiments. Also it starts to
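That point is easy to demonstrate in miniature (Python here, though the original context is a Java stack): raising the limit does not fix runaway recursion, it only lets it run deeper before failing.

```python
import sys

# depth() recurses until the interpreter refuses, and reports how far it got.
def depth(n=0):
    try:
        return depth(n + 1)
    except RecursionError:
        return n

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(100)
shallow = depth()          # fails early
sys.setrecursionlimit(1000)
deeper = depth()           # same failure, just later
sys.setrecursionlimit(old_limit)
```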
caused us to remove the exclusions.
On 1 July 2015 at 10:16, Morten Olav Hansen morte...@gmail.com wrote:
On Wed, Jul 1, 2015 at 4:12 PM, Bob Jolliffe bobjolli...@gmail.com wrote:
Having said that, this stack overflow business might carry some clue.
@Morten you mention elsewhere in this thread
This means your nginx frontend is fine but there is some problem with
the tomcat backend. Normally we see 502 (see
https://www.dhis2.org/doc/snapshot/en/implementer/html/ch20s05.html).
Check the logs of your dhis instance for what is bothering it. Maybe
you are out of memory.
On 30 June 2015 at
A few quick thoughts on document storage:
1. once you have a lot of documents it becomes an interesting problem
finding them/searching them. People have been doing this for a LONG
time and there are a couple of well understood metadata standards for
storing metadata about documents (for example
to that URL directly in a web browser it pulls the data
element in question right up.
Timothy Harding
RPCV Vanuatu
Skype: hardi...@gmail.com
+1 (541) 632-6623
On Fri, Aug 7, 2015 at 7:29 AM, Bob Jolliffe bobjolli...@gmail.com wrote:
Hi Tim. Are you using PATCH for this update operation
Hi Tim. Are you using PATCH for this update operation? This works
well if you just need to update a particular set of fields, not the
entire dataelment (for which you would use POST and include mandatory
fields like name and shortname). For example:
curl -X PATCH -d "{\"code\": \"VCCT_6\"}" -H
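For anyone not using curl, the same request can be built with Python's standard library; the server URL and data element UID below are placeholders, not real endpoints:

```python
import json
import urllib.request

# Build (without sending) a PATCH request updating just the code field.
url = "https://server.example/api/dataElements/FTRrcoaog83"
body = json.dumps({"code": "VCCT_6"}).encode()
req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={"Content-Type": "application/json"},
)
```

Sending it is then a matter of urllib.request.urlopen(req) with suitable authentication.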
I know that openmrs makes use of liquibase for managing database diffs
: http://www.liquibase.org/
It might be worth considering whether this approach would be useful
for us in place of our hibernate ddl + extra sql upgrade script .
Student project?
On 23 July 2015 at 20:36, Morten Olav Hansen
Hi Mritunjay
As Lars says, if the categoryoptioncombos were not exported then that
would indeed have caused a problem. And/or if you were using internal
primary keys in your reports.
But is there a particular reason you need to rely on the import/export
functionality to do this? Did you try
Hi Gerald
Did you check that there is not data missing for Oct, Nov, Dec 2014
for the same districts? I think we had a suspicion some time back
that there was maybe a problem with the client PC date settings. So
they thought they were entering for Oct 2014 but in fact it was Oct
2015. Hence it
For similar (tangentially related) configuration, I have used json.
See below. Either yaml or json (or even dreaded xml), I agree with
Jason that a bit of structure can be beneficial. Yaml syntax is maybe
the most forgiving for user editing by hand.
{
"dhis2Systems" : {
"hmis" : {
"type" :
I am less convinced that YAML would be too much. There are of course
different audiences and Lars points us to the lowest common
denominator ("how do i open this in word?") who is a troublesome
customer alright, but maybe not the most important one.
But for db configuration this is fine.
The
Hi Lars
Renaming sounds good. Just a minor clarification request/suggestion below ...
On 25 August 2015 at 10:31, Lars Helge Øverland wrote:
> Hi Tran, Abyot,
>
> I propose that we rename:
>
> ProgramInstance.dateOfIncident to incidentDate;
+1
>
> and
Hi David
The log file does seem to indicate that the startup was successful.
It does look like an orderly shutdown rather than any sign of
out-of-memory exceptions. Which happens about 1 minute after the
successful startup. Did you shut it down yourself or it just
collapsed without you doing
Hi Channara
The man page for curl indicates that the format of the command is:
curl
Try moving your url to the end of the line, after the options. Also you
might want to add the "http://localhost..." prefix.
On 3 September 2015 at 11:01, channara rin wrote:
> Hi DHIS2 friends,
thub.com/bmatzelle/gow
>>>> On Thu, Sep 3, 2015 at 12:08 PM, Morten Olav Hansen <morte...@gmail.com> wrote:
>>>>> I have also seen weird issues on Windows versions of cURL where -u
Gerald you would need to do this with an SQL delete command. Before
you do you should be 100% sure that you really want to delete the
data - maybe make a backup first.
You need to find the primary key (organisationunitid) of the orgunit
you want to delete. Say it is 5677. Then:
DELETE FROM
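The same workflow in miniature with sqlite rather than postgres (stripped to two columns; a real orgunit row has foreign-key references that also need handling before the delete will succeed):

```python
import sqlite3

# Backup first, then delete by primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE organisationunit (organisationunitid INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO organisationunit VALUES (5677, 'Duplicate clinic')")

backup = list(conn.execute("SELECT * FROM organisationunit"))  # crude 'backup'
conn.execute("DELETE FROM organisationunit WHERE organisationunitid = 5677")
remaining = conn.execute("SELECT COUNT(*) FROM organisationunit").fetchone()[0]
```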
I am not sure if this is an issue which has been fixed in later
versions, but we recently chanced upon some odd behaviour in 2.17.
On the production system there are a number of views defined (through
the sql view interface). A couple of those views depend on other
views. When resource tables
> is alphabetically ordered after it).
>
> regards,
> Lars
>
> On Mon, Sep 28, 2015 at 1:09 PM, Knut Staring <knu...@gmail.com> wrote:
>> I have a similar problem in 2.19 where backups are not getting generated
>> because of sqlviews.
Public bug reported:
When you create a new orgunit and press the "Add" button multiple times
before the page reloads you end up with multiple orgunits with the same
name and different uids. This causes a surprisingly common problem on
high latency connections. The user presses "add" and thinks
This looks nice. Quick question: is it realistic to think in terms of
using this api for doing incremental data backups?
On 1 December 2015 at 08:21, Morten Olav Hansen wrote:
> http://dhis2.github.io/dhis2-docs/master/en/developer/html/ch01s21.html
>
> Basic docs are now
xml metadata (specifically for orgunits) is being used in Rwanda.
Technically it could be changed to use json, but some work would be
required. There is an xslt which is applied to the metadata to strip
some stuff as I recall.
On 17 December 2015 at 16:29, Lars Helge Øverland
Second approach sounds most painless to me.
A 3rd approach (shooting from the hip) is to create a postgres trigger
function to generate codes on insert.
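The trigger idea in miniature, using sqlite rather than postgres; the FAC- naming scheme is invented for illustration:

```python
import sqlite3

# A trigger fills in a code whenever a row arrives without one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE facility (id INTEGER PRIMARY KEY, name TEXT, code TEXT);
    CREATE TRIGGER facility_code AFTER INSERT ON facility
    WHEN NEW.code IS NULL
    BEGIN
        UPDATE facility SET code = 'FAC-' || NEW.id WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO facility (name) VALUES ('Kailahun CHC')")
code = conn.execute("SELECT code FROM facility").fetchone()[0]
```

In postgres the equivalent would be a BEFORE INSERT trigger function in PL/pgSQL.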
On 24 November 2015 at 15:27, Carl Leitner wrote:
> Hi all,
> We are looking at using DHIS2 to manage facilities for a
Welcome Ken!
On 19 November 2015 at 20:56, Jim Grace wrote:
> Hi All,
>
> Ken Haase has just joined the team as a DHIS 2 developer, working through
> HISP US. Among other things, Ken has a PhD in Artificial Intelligence from
> MIT and has taught at the MIT Media Lab. He also
That ldap.url looks like the HTTP URL of your PHP LDAP frontend web
application. You need to point it at the running LDAP service, not the PHP
web interface.
On 8 June 2016 at 07:00, Chameera Mirihella wrote:
> Hi Team,
>
> I am trying to configure a dhis instance
If you have your source data in postgresql, you can also use the following
handy function to generate uids directly from the database:
CREATE OR REPLACE FUNCTION uid()
RETURNS text AS $$
SELECT substring('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
FROM (random()*51)::int + 1 FOR 1) ||
array_to_string(ARRAY(
SELECT substring('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
FROM (random()*61)::int + 1 FOR 1)
FROM generate_series(1, 10)), '')
$$ LANGUAGE sql;
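For comparison, a Python equivalent of the same idea, assuming the usual DHIS2 uid shape of 11 characters, a letter followed by 10 letters or digits:

```python
import random
import string

# Generate a DHIS2-style uid: one letter, then 10 letters or digits.
def uid():
    first = random.choice(string.ascii_letters)
    rest = "".join(random.choices(string.ascii_letters + string.digits, k=10))
    return first + rest

generated = uid()
```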
Gerald, are you asking how to do it or do you want someone to write a
script for you?
On 14 January 2016 at 11:09, gerald thomas wrote:
> Dear All,
> Please can someone help as per subject???
>
> On 1/8/16, gerald thomas wrote:
>> Dear All,
>>
Calle
Here's another take on the problem.
Imagine you had a separate "metadata" instance (could variously be
called facility registry, data dictionary, indicator registry etc).
Basically a dhis2 instance with no data. And a staging instance which
contains just a copy of the metadata instance.
Yes firing off arbitrary javascript is not a good thing.
It should probably be filtered on input and escaped on output though
opinions vary a bit on approaches. I think these sorts of issues were
being targeted in the new metadata maintenance app.
On 25 February 2016 at 08:51, Knut Staring
Very odd misconfiguration error for nginx though. Did you install
through some package manager (apt, yum, ...) or was this manually
unpacked and configured? If there is an error in the standard ubuntu
install for example, it's an important issue to be aware of.
On 18 February 2016 at 11:12, Olav
I see the code for parsing lastUpdated on the event controller is not
the same as on the metadata filtered export. Carl, you are right that
currently the format is restricted to the patterns shown in the
manual. It would be trivial to add support for ISO 8601 timestamps.
Can you make a blueprint
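The parsing side would be trivial indeed; in Python terms (the Java code would differ, this is just the shape of it):

```python
from datetime import datetime

# Parse an ISO 8601 lastUpdated timestamp; a trailing 'Z' is normalised
# by hand so this works on Python versions before 3.11.
def parse_last_updated(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

stamp = parse_last_updated("2016-03-01T12:30:00Z")
```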
And this Chinese supplier might well be the way around any issues
arising from US sanctions :-)
On 14 March 2016 at 12:30, Steven Uggowitzer wrote:
> Thanks David,
>
> For those of you looking for another (probably less secure) free option,
> there is also WoSign
Thanks David. That's really useful. Removing credit card
transactions from the process of getting certs signed is going to be
very valuable.
Note from here
(https://community.letsencrypt.org/t/certificates-for-us-sanctioned-countries/1223)
that there might still be some issues in some US
One other quick thought that is easy to test and eliminate. Postgres
out of memory errors on restore can also result from a corrupted dump
file. It might be worthwhile to check on another system that your
dump is good.
On 18 March 2016 at 17:08, Bob Jolliffe <bobjolli...@gmail.com> wrote
Ah you are on Windoze. I also don't have much real experience of
running dhis2 other than on linux, but it strikes me that
(i) 4G machine is small but should still "work"
(ii) the database size you are talking about (100m) is quite small and
the restore operation should not be consuming vast amounts
Hi
In case anyone has noticed and started to panic, there are some new
openssl vulnerabilities, which might affect your nginx installations.
http://www.infoq.com/news/2016/03/two-new-openssl-flaws
Hi Lars
I know that a lot of production servers are running postgres 9.3
(default with ubuntu 14.04).
The instructions assume postgres 9.4 is used. I know they could
upgrade, but I wonder has anybody tested the gis extensions with 9.3?
Regards
Bob
On 25 April 2016 at 18:38, Lars Helge
ding to the docs, PostGIS 2.2 works against PostgreSQL 9.1
> and later:
>
> http://postgis.net/docs/manual-2.2/postgis_installation.html#install_requirements
>
> We have of course tested DHIS 2 against PostGIS 2.2, so this should be
> safe.
>
> Lars
Hi Ifeanyi
Can you give exact psql command you are using and the output?
If it is a plain text format you should be able to see the sql
commands in it by running a command like 'less backup.sql' (assuming
that is the name of your file). That is always good to verify that
what you have is indeed
atabase.
>
> Regards,
> C Eneja
>
> -- Original message --
> From: Dr. Ifeanyi Okoye
> Date: Thu, 19 May 2016 13:55
> To: 'Bob Jolliffe'; 'Johan Ivar Sæbø';
> Cc: 'dhis2-devs'; CHIKWADO ENEJA;
> Subject: [SUSPEC
ating installing pgAdmin on the server then attempt to connect to it
> from a Windows pc. Do you think this is a good way to proceed?
>
> Regards,
> Eneja.
>
> -- Original message --
> From: Bob Jolliffe
> Date: Thu, 19 May 2016 14:3
ponsible for maintaining the
> server. Sadly, I have never had to do this kind of db restore before and
> neither has any member of the team.
>
> Any help would be great.
>
> -- Original message --
> From: Bob Jolliffe
> Date: Thu, 19
Hi Mohamed
Looking at the small snippet of log file, the dhis war file failed to
load. You are right that this has likely something to do with the
contents of your database, but the picture of your log file doesn't
give the required info. Would need to look at what is happening much
earlier in