Re: arts_dev.mi Digest, Vol 62, Issue 4

2024-05-23 Thread Patrick Eriksson via arts_dev.mi

Hi,

This information is in an email from Oliver dated April 17. I will forward it. 
We will likely send out a reminder/update next week.


/P

On 2024-05-23 17:09, Leo Pio via arts_dev.mi wrote:

Dear Patrick and Stefan,

it would be very useful to know where the bus leaves from on Tuesday 
morning, June 4th, at 10:00 AM. The easy answer would be the bus 
station, but we want to be sure to reach the right place. Thanks.


Best,
Leo Pio





arts_dev.mi-requ...@lists.uni-hamburg.de wrote:


Message: 1
Date: Tue, 30 Jan 2024 10:11:44 +0100
From: Stefan Buehler
Subject: Kristineberg ARTS workshop 2024, register now

Dear all,

summer 2024 is drawing closer, and so is the highlight of this summer, 
the ARTS radiative transfer workshop at Kristineberg research station, 
on the Swedish west coast.


The workshop will be on June 4-7, 2024 (from noon to noon). The target 
audience is users and developers of the atmospheric radiative 
transfer simulator ARTS, as well as anyone interested in spectroscopy or 
radiation who can give us new impulses.


Anticipated topics are:
Gas absorption
Scattering
Surface interaction
The shortwave side of things (since this is new in ARTS)
ARTS applications
New sensors

We anticipate that there will be dedicated talk sessions on these 
topics, plus posters, and also space for informal discussion and 
practical help with ARTS. Please indicate your contribution(s) with 
the registration; we will then compile an explicit agenda.


Some practicalities:

There will be a free bus transfer from Göteborg on June 4 at 10:00, 
and back to Göteborg on June 7, arriving approximately at 15:00. The 
workshop fee is around 460 € all inclusive if you stay at the research 
station, and approximately 280 € (including lunch and dinner) if you 
stay at one of the nearby hotels (plus the cost of the hotel). You pay 
the fee directly to the research station's staff (card payment only). 
The number of participants is limited to 50, due to the size of the 
bus, and the number of beds at the research station itself is limited 
to 45, so register soon if you want to take part!


Note also that there are only a few single rooms at the research 
station, so most people staying on-site will have to share a double 
room. The registration form allows you to indicate if room sharing is 
ok for you or not.


Registration: https://www.mi.uni-hamburg.de/arts2024

Registration deadline: February 29

Some useful links:

Kristineberg marine research station:
https://www.gu.se/en/kristineberg

Hotels within walking distance (Gullmarsstrand, Slipens):
https://gullmarsstrand.se/en/hotel/
https://www.slipenshotell.se/ (link seems not to work for some people, 
alternative: https://www.booking.com/Share-8DAo0f )


Hoping to meet you in June!

Patrick and Stefan






Re: Surface properties

2024-01-19 Thread Patrick Eriksson

Leo,

Assuming you are using pyarts, you find documentation here:

https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.GriddedField2.html#pyarts.arts.GriddedField2

See especially point 3 under __init__.
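
For the matrix case, here is a minimal sketch (assuming an existing
Workspace ws, and that the 1-D grid vectors lat and lon and the 2-D
array skin_temperature have already been read from the ERA5 file; these
variable names are examples, not from your script):

import pyarts.arts as pa

gf2 = pa.GriddedField2()
gf2.gridnames = ["Latitude", "Longitude"]
gf2.grids = [lat, lon]           # 1-D grids in degrees
gf2.data = skin_temperature      # 2-D array of shape (len(lat), len(lon))
ws.GriddedField2Create("skinTemperature")
setattr(ws, "skinTemperature", gf2)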


For completeness, if using xarray here is another option:

https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts_ext.GriddedFieldExtras.from_xarray.html#pyarts.arts_ext.GriddedFieldExtras.from_xarray
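
A sketch of that route as well (same assumed variables as above;
untested, so check the expected dimension names against the linked
documentation):

import xarray as xr
import pyarts.arts as pa

da = xr.DataArray(
    skin_temperature,
    dims=["Latitude", "Longitude"],
    coords={"Latitude": lat, "Longitude": lon},
)
gf2 = pa.GriddedField2.from_xarray(da)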

Bye,

Patrick


On 2024-01-18 17:21, leopio.dadde...@artov.isac.cnr.it wrote:

Dear Patrick,

many thanks for your reply. Anyway, I think I was not too clear. I am 
working with skin temperature as a matrix (and likewise for wind at 10 m 
and the land/ocean mask). I read both skin temperature and wind from ERA5 
data, so the data change according to my needs (obviously, the land/ocean 
mask is always the same). Your code is designed only for a scalar, constant 
skin temperature value. Is that correct?


Thanks,
Leo Pio





Dear Leo Pio,

Below, you find Python code that you can use if you are working with 
scalar skin_temperatures. Otherwise, I assume this is enough for you 
to make a more general method.


Bye,

Patrick


import numpy as np
import pyarts.arts as pa
from pyarts.workspace import Workspace


def GriddedField2GloballyConstant(
    ws: Workspace,
    name: str,
    value: float,
) -> None:
    """
    Sets a WSV of type GriddedField2 to hold a constant value.

    The WSV is assumed to represent geographical data, and the dimensions
    are set to Latitude and Longitude. The data are defined to cover the
    complete planet.

    :param ws:    Workspace.
    :param name:  Name of WSV to fill.
    :param value: Fill value.
    """
    # Two grid points per dimension are enough for a constant field.
    gf2 = pa.GriddedField2()
    gf2.gridnames = ["Latitude", "Longitude"]
    gf2.grids = [np.array([-90, 90]), np.array([-180, 360])]
    gf2.data = np.full((2, 2), value)
    gf2.name = "Generated by easy arts function GriddedField2GloballyConstant"
    setattr(ws, name, gf2)
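
Typical use would be something like this (the WSV name and value are
just examples):

ws.GriddedField2Create("surface_skin_t")
GriddedField2GloballyConstant(ws, "surface_skin_t", 285.0)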


On 2024-01-17 10:08, leopio.dadde...@artov.isac.cnr.it wrote:

Dear ARTS community,

within my simulation, I am setting the surface properties. In 
particular, when I use the TESSEM and TELSEM models to calculate 
emissivity and reflectivity of ocean and land, respectively, I need 
as input the wind at 10 m, the skin temperature and a land/ocean mask 
(among others). I read these variables from NetCDF files using 
Python (i.e. netCDF4 and NumPy).
In the related agenda, I use "InterpGriddedField2ToPosition", which 
requires a "GriddedField2" variable (i.e. skin temperature, or wind) 
as input. To create this variable, I do the following (for instance 
with skin temperature):


ws.GriddedField2Create("SkinTemperature")
ws.Copy(ws.skinTemperature,skinTemperature)

At this point, I get the following error:
"Could not convert input [here there are the values of the skin 
temperature matrix] to expected group GriddedField2."


Where am I going wrong? I thought "Copy" was a method to fill a GriddedField2 variable.

I hope I was clear, any help is welcomed. Thanks.

Leo Pio











Re: Surface properties

2024-01-17 Thread Patrick Eriksson

Dear Leo Pio,

Below, you find Python code that you can use if you are working with 
scalar skin_temperatures. Otherwise, I assume this is enough for you to 
make a more general method.


Bye,

Patrick


import numpy as np
import pyarts.arts as pa
from pyarts.workspace import Workspace


def GriddedField2GloballyConstant(
    ws: Workspace,
    name: str,
    value: float,
) -> None:
    """
    Sets a WSV of type GriddedField2 to hold a constant value.

    The WSV is assumed to represent geographical data, and the dimensions
    are set to Latitude and Longitude. The data are defined to cover the
    complete planet.

    :param ws:    Workspace.
    :param name:  Name of WSV to fill.
    :param value: Fill value.
    """
    # Two grid points per dimension are enough for a constant field.
    gf2 = pa.GriddedField2()
    gf2.gridnames = ["Latitude", "Longitude"]
    gf2.grids = [np.array([-90, 90]), np.array([-180, 360])]
    gf2.data = np.full((2, 2), value)
    gf2.name = "Generated by easy arts function GriddedField2GloballyConstant"
    setattr(ws, name, gf2)


On 2024-01-17 10:08, leopio.dadde...@artov.isac.cnr.it wrote:

Dear ARTS community,

within my simulation, I am setting the surface properties. In 
particular, when I use the TESSEM and TELSEM models to calculate 
emissivity and reflectivity of ocean and land, respectively, I need 
as input the wind at 10 m, the skin temperature and a land/ocean mask 
(among others). I read these variables from NetCDF files using 
Python (i.e. netCDF4 and NumPy).
In the related agenda, I use "InterpGriddedField2ToPosition", which 
requires a "GriddedField2" variable (i.e. skin temperature, or wind) 
as input. To create this variable, I do the following (for instance 
with skin temperature):


ws.GriddedField2Create("SkinTemperature")
ws.Copy(ws.skinTemperature,skinTemperature)

At this point, I get the following error:
"Could not convert input [here there are the values of the skin 
temperature matrix] to expected group GriddedField2."


Where am I going wrong? I thought "Copy" was a method to fill a GriddedField2 variable.

I hope I was clear, any help is welcomed. Thanks.

Leo Pio







Re: [EXTERNAL] [BULK] 3D MC

2023-12-06 Thread Patrick Eriksson

Ian,

Thanks for the input. Great that you have stress-tested MC. Too bad that 
it revealed a limitation.


Good suggestion about iyMC. Today it would not be possible to do the 
random sampling from yCalc; it would require information about the sensor 
that is not at hand inside yCalc. But we are planning to redesign the way 
the sensor is described, and this should be considered then.


Not totally sure exactly what you mean by using the MC-sampled antenna 
pattern more broadly, but I tend to agree. It would be good if there 
were mechanisms to give monochromatic pencil-beam calculations some 
width in frequency and space. It would speed up simulations of 
observations. As an example, I have been playing around with a scheme to 
locally average the surface emissivity around the point where you hit the 
surface, to make simulations in coastal areas faster.


And yes, the most tricky part is finding the time for the work.

Bye,

Patrick


On 2023-12-06 16:09, Adams, Ian S {he, him, his} (GSFC-6120) wrote:

Hi Stefan,

I have been contemplating changes to the MC codes. One thing we have found is 
that MCGeneral breaks down when Q starts to get large. We see unrealistic 
results at 684 GHz when using horizontally aligned particles with high aspect 
ratios. Yuli Liu, who is working with us now, did a comprehensive analysis, and 
we believe that the issue is the way the backwards algorithm uses importance 
sampling to avoid inverting the extinction matrix; however, this 
approach neglects the mixing of I and Q. I believe this is a simple fix.

The other issue is that MCGeneral is not very ARTS-like. Looking at the way it is 
structured, I think a better approach would be to have an iyMC that traces a single 
"photon," and yMC would integrate these individual results. Random sampling of 
both the antenna pattern and the bandwidth could be performed at this level. I also think 
that the MC sampled antenna pattern could be more widely useful across ARTS.

These papers provide an interesting curveball. The ARTS MC codes are 
particularly slow, and they are not optimized for optically thin or extremely 
optically thick atmospheres. We could look at using these libraries, or at 
least techniques, but I'm not sure how intensive such a restructuring of the 
code would be.

Of course, the tricky piece here is finding someone with the time to do this 
work. But, I think these changes would make the codes significantly more 
usable, and hopefully therefore used.

Cheers,
Ian

On 11/29/23, 11:22 AM, "arts_dev.mi on behalf of Stefan Buehler" <arts_dev.mi-boun...@lists.uni-hamburg.de on behalf of stefan.bueh...@uni-hamburg.de> wrote:


Dear all,


I stumbled across this interesting paper on an open C library for particularly 
efficient MC calculations. Could this be the basis of ARTS 3D MC flux and 
heating-rate calculations? Using MC sampling also for the spectral dimension, 
to be efficient, as in the second paper, which is also impressive, I think. 
They use MC sampling even for the spectral lines, if I got it right! (Basically 
treating each transition as if it were its own absorption species.)


/Stefan


https://www.dropbox.com/scl/fi/smsisfgc2it3sx4gov970/J-Adv-Model-Earth-Syst-2019-Villefranque-A-Path-E2-80-90Tracing-Monte-Carlo-Library-for-3-E2-80-90D-Radiative-Transfer-in-Highly.pdf?rlkey=v5yvrm64fnljaf739j4ssllux&dl=0
 



https://www.dropbox.com/scl/fi/r1tm3jdzx57kb85nowmt0/Yaniss_ea_PNAS_2023_smi.pdf?rlkey=8d4a7rb4u8pehckawbfk08c9f&dl=0
 




Re: RTE_POS

2023-12-06 Thread Patrick Eriksson

Hi,

If your version has geo_pos_agenda, you should put geo_posEndOfPpath in 
that agenda.

If there is no such agenda, geo_posEndOfPpath should be placed inside 
iy_main_agenda.

In any case, you should not need to do extra calculations; y_geo should 
be set in a standard call of yCalc.
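
As a minimal sketch (assuming pyarts 2.5 and that geo_posEndOfPpath is
already placed in the agenda as described above):

ws.yCalc()
y_geo = ws.y_geo.value   # one geo-position row per element of y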


Bye,

Patrick

On 2023-12-06 15:44, leopio.dadde...@artov.isac.cnr.it wrote:

Patrick,

I followed your suggestion, very useful. I am able to get geo_pos (i.e. 
y_pos) but it has only NaNs. "geo_posEndOfPpath" needs as input "ppath", 
which I generate from "ppathCalc", which in turn requires (among 
others) "rte_pos", "rte_los" and "rte_pos2".
Here is my first doubt: "rte_pos2" should be the result of the combination 
of "rte_pos" and "rte_los". Anyway, I set rte_pos=sensor_pos (satellite 
position) and rte_los=[180,0] (which should be nadir looking). I set 
rte_pos2=[0,0,0] but I am totally not sure about "rte_pos2". If you could 
shed light on this, it would be very useful for me.


Thanks,
Leo Pio


Leo,

If you want to know the complete path through the atmosphere, you can 
do as you outline. If you are only interested in where you end up at 
the surface, you can use the geo_pos mechanism. You need to set 
geo_pos by adding the WSM geo_posEndOfPpath.


Exactly how geo_pos is handled has been changed, and I don't remember 
exactly the status in v2.5.0. But I hope you can figure it out.


With this done, the "geo pos" comes out from yCalc as y_geo.

Please note that you get out proper lat and lon only if running 3D 
calculations. For 1D you basically get some relative lat and lon.


Bye,

Patrick


On 2023-11-29 10:57, leopio.dadde...@artov.isac.cnr.it wrote:

Hi Richard,

many thanks for your answer. I will try to answer your questions.
I am using ARTS 2.5.0.
My entry point is 'yCalc', you are correct. I have some Python scripts 
which call ARTS commands, so I would say that I run ARTS via the custom 
language interface.
Currently, I am getting and saving 'sensor_pos' and 'sensor_los' 
(that match 'y_pos' and 'y_los' but are not the same, right?). But, 
if I understand well, you are saying that I can set 'rte_pos2' and 
'rte_los' equal to 'y_pos' and 'y_los' and then run 'ppathCalc'.


Best,
Leo Pio




Hi Leo,

What you have encountered can be shortly summarized as rte_pos only
existing inside the Agenda you call. You don't have it at hand anywhere
else. rte_pos also does not represent what you think it does; it is simply
a radiative transfer equation position, and it can be anywhere inside or
outside of the atmosphere.

Before any other specific help can be given, you need to specify what
version of ARTS you are using. Are you running ARTS via Python or via the
custom language interface? Is your entry point to the calculations via
`yCalc`?

Those details matter for the answer you might need. Generally, if you
want to investigate the atmospheric path you are using, you will want to
generate a `ppath` and extract the relevant information. The way to do
that depends on the answers above, and any attempt to answer this without
first filling in those details would give details that are perhaps not
needed.

If you are running it via `yCalc`, you get `y_pos` and `y_los` as outputs.
Those can be used to generate `rte_pos{,2}` and `rte_los` required for
`ppathCalc` to run. You can then extract the relevant information from the
generated `ppath`, either via custom language commands or just by accessing
the data it holds in Python. The documentation for accessing data in
ppath for the latest version of ARTS available via conda-forge can be found
here:
https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.Ppath.html#pyarts.arts.Ppath
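
A sketch of that route (assuming pyarts 2.5, a single line of sight in
row 0, and that an empty rte_pos2 is acceptable for your setup):

import numpy as np

y_pos = np.asarray(ws.y_pos.value)
y_los = np.asarray(ws.y_los.value)
ws.VectorSet(ws.rte_pos, y_pos[0, :])
ws.VectorSet(ws.rte_los, y_los[0, :])
ws.VectorSet(ws.rte_pos2, [])   # empty vector: no target point used
ws.ppathCalc()
print(ws.ppath.value.pos)       # positions along the propagation path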

//Richard

On Tue 28 Nov 2023 at 15:51,  wrote:



Dear ARTS community,

I am a new user of ARTS. One of my tasks is to simulate passive
microwave radiometers onboard low Earth orbit satellites. To this end,
I would like to know which region of the Earth's surface the satellite is
looking at. I set the satellite position through "sensor_pos" and the line
of sight of the satellite through "sensor_los". When I try to get and
save to an XML file the geographical position for starting the radiative
transfer calculation (i.e. rte_pos), I get the following error:

Method WriteXML needs input rte_pos but it is uninitialized.

Can anyone help me on this? Many thanks.

Best regards,
Leo Pio













Re: RTE_POS

2023-11-29 Thread Patrick Eriksson

Leo,

If you want to know the complete path through the atmosphere, you can do 
as you outline. If you are only interested in where you end up at the 
surface, you can use the geo_pos mechanism. You need to set geo_pos by 
adding the WSM geo_posEndOfPpath.


Exactly how geo_pos is handled has been changed, and I don't remember 
exactly the status in v2.5.0. But I hope you can figure it out.


With this done, the "geo pos" comes out from yCalc as y_geo.

Please note that you get out proper lat and lon only if running 3D 
calculations. For 1D you basically get some relative lat and lon.


Bye,

Patrick


On 2023-11-29 10:57, leopio.dadde...@artov.isac.cnr.it wrote:

Hi Richard,

many thanks for your answer. I will try to answer your questions.
I am using ARTS 2.5.0.
My entry point is 'yCalc', you are correct. I have some Python scripts 
which call ARTS commands, so I would say that I run ARTS via the custom 
language interface.
Currently, I am getting and saving 'sensor_pos' and 'sensor_los' (that 
match 'y_pos' and 'y_los' but are not the same, right?). But, if I 
understand well, you are saying that I can set 'rte_pos2' and 'rte_los' 
equal to 'y_pos' and 'y_los' and then run 'ppathCalc'.


Best,
Leo Pio




Hi Leo,

What you have encountered can be shortly summarized as rte_pos only
existing inside the Agenda you call. You don't have it at hand anywhere
else. rte_pos also does not represent what you think it does; it is simply
a radiative transfer equation position, and it can be anywhere inside or
outside of the atmosphere.

Before any other specific help can be given, you need to specify what
version of ARTS you are using. Are you running ARTS via Python or via the
custom language interface? Is your entry point to the calculations via
`yCalc`?

Those details matter for the answer you might need. Generally, if you
want to investigate the atmospheric path you are using, you will want to
generate a `ppath` and extract the relevant information. The way to do
that depends on the answers above, and any attempt to answer this without
first filling in those details would give details that are perhaps not
needed.

If you are running it via `yCalc`, you get `y_pos` and `y_los` as outputs.
Those can be used to generate `rte_pos{,2}` and `rte_los` required for
`ppathCalc` to run. You can then extract the relevant information from the
generated `ppath`, either via custom language commands or just by accessing
the data it holds in Python. The documentation for accessing data in
ppath for the latest version of ARTS available via conda-forge can be found
here:
https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.Ppath.html#pyarts.arts.Ppath

//Richard

On Tue 28 Nov 2023 at 15:51,  wrote:


Dear ARTS community,

I am a new user of ARTS. One of my tasks is to simulate passive
microwave radiometers onboard low Earth orbit satellites. To this end,
I would like to know which region of the Earth's surface the satellite is
looking at. I set the satellite position through "sensor_pos" and the line
of sight of the satellite through "sensor_los". When I try to get and
save to an XML file the geographical position for starting the radiative
transfer calculation (i.e. rte_pos), I get the following error:

Method WriteXML needs input rte_pos but it is uninitialized.

Can anyone help me on this? Many thanks.

Best regards,
Leo Pio









Re: Error with OEM retrieval in ARTS

2023-06-15 Thread Patrick Eriksson

Stuart,

The built-in doc of OEM clarifies that x is both IN and OUT. But there 
is no explanation of what the input states mean. We need to work on the 
documentation!


But there is some help in

/controlfiles/artscomponents/oem/TestOEM.arts

Here you find:

# x, jacobian and yf must be initialised (or pre-calculated as shown below)
#
VectorSet( x, [] )
VectorSet( yf, [] )
MatrixSet( jacobian, [] )


# Or to pre-set x, jacobian and yf
#
#Copy( x, xa )
#MatrixSet( jacobian, [] )
#AgendaExecute( inversion_iterate_agenda )


My memory is that if you leave x empty, it is set to xa. The other 
option is there to allow you to start the iteration from another state.


I don't think we have changed this recently. So rather strange that your 
old setup worked. Anyhow, I hope this clarifies how to remove the error.
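
For reference, a pyarts sketch of the same initialisation (assuming
your workspace object is called workspace, as in your message; the OEM
call itself keeps whatever settings you already use):

workspace.VectorSet(workspace.x, [])
workspace.VectorSet(workspace.yf, [])
workspace.MatrixSet(workspace.jacobian, [])
# then call workspace.OEM(...) with your usual settings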


Bye,

Patrick



On 2023-06-15 14:58, Stuart Fox wrote:

Hi developers,

I have an ARTS OEM retrieval set-up that used to work fine based on the 
ARTS trunk from Feb 2022, but recently I’ve updated to the latest 
development version of ARTS and when calling the workspace.OEM() method 
it fails with “Not initialised: x”. Any clues on how to fix this? I am 
initialising the retrieval with workspace.xaStandard(), so I think I am 
correctly initialising the value of xa – it’s not obvious to me why I 
should have to initialise x at all (since presumably it should always be 
set to the same value as xa to begin with?)


Thanks,

Stuart

Dr Stuart Fox  Radiation Research Manager
*Met Office* FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
Tel: +44 (0)1392 885197  Fax: +44 (0)1392 885681
Email: stuart@metoffice.gov.uk  Website: www.metoffice.gov.uk



Re: Fwd: [arts-users] ARTS ICI Cloud Simulations

2022-03-21 Thread Patrick Eriksson

Hi,

Sorry, I should have informed you. Kyle wrote to me on the side as well, 
and there I asked for more details and then answered separately.

Not perfect. Next time I will force him to keep it all on arts-users.

Bye,

Patrick



On 2022-03-21 11:46, stefan.bueh...@uni-hamburg.de wrote:

Hi all, is there anyone that can take this? Stefan


Begin forwarded message:

From: Kyle Johnson
Subject: [arts-users] ARTS ICI Cloud Simulations
Date: 13 March 2022 at 18:06:03 CET
To: arts_users...@lists.uni-hamburg.de
Reply to: kyle.johnso...@colorado.edu



Hello,
I was wondering if you all had a version of the controlfile ICI 
simulation in ARTS that included clouds. My name is Kyle Johnson and I 
am a graduate student at CU Boulder. I have been using the ICI 
simulation in ARTS as the basis for an independent study project and 
need to add clouds in. I have tried adding clouds in on my end and 
have been unsuccessful.

Thank you for your time,
Kyle Johnson (he/him/his)
Graduate Student /University of Colorado Boulder/




Re: Fwd: Eradiate Workshop 2022

2022-03-01 Thread Patrick Eriksson

Stefan and all,

I cannot go, busy with teaching and a too high overall load. From the 
Chalmers side, the best candidate is Vasilis. He has developed MC code 
for the optical region. He is now in Greece and has a relatively short 
trip. But his employment at Chalmers ends June 30. So it would be 
better if someone else could go.

Stefan: Do you know more about how they got funding for this? Sounds as 
if we could promote ARTS as something similar for microwave to IR. This 
looks like the type of funding we have been lacking.


Bye,

Patrick





On 2022-03-01 09:43, stefan.bueh...@uni-hamburg.de wrote:

Dear all,

I know Yves, so this is legit. Perhaps someone from us should 
participate in the release workshop? This is perhaps similar to the 3-D 
Monte Carlo they do in Munich. But open. Focused on the solar spectral 
range, of course.


Stefan


Begin forwarded message:

From: n...@eradiate.eu
Subject: Eradiate Workshop 2022
Date: 28 February 2022 at 16:58:41 CET
To: 


Dear Stefan Buehler,

You are receiving this email because you were identified as a 
radiative transfer model user or developer.


The development of Eradiate, a new 3D 
radiative transfer model, started in 2019 with the goal to create a 
novel simulation platform for radiative transfer applied to Earth 
observation. Eradiate intends to be highly accurate and uses advanced 
computer graphics software as its Monte Carlo ray tracing kernel. It 
provides a modern Python interface specifically designed for 
integration in interactive computing environments. It is also free 
software licensed under the GNU General Public License v3.


At the end of March 2022, Eradiate will be released to the public and 
open to contribution from users. On this occasion, the Eradiate team 
will organise a workshop, kindly hosted by ESA/ESRIN in Frascati on 
Tuesday March 29th and Wednesday March 30th, 2022. This workshop 
will be organised with a hybrid setup allowing remote participants 
to attend. Participation in the workshop is open and you can register 
by replying to this email, providing the following information:


- First name
- Last name
- Contact email address
- Affiliation
- Whether you wish to join us in Frascati or prefer to attend remotely

Please be aware that the number of on-premises seats is limited, 
assigned on a first come, first served basis. Registration for 
on-premises participation will be closed on March 21st, 2022.


The workshop announcement letter, with further information on the 
programme, is available here. You can also register to our mailing list 
if you want to be updated about Eradiate in the future.


Kind regards,

Yves Govaerts, for the Eradiate Team





Re: Failing tests

2021-09-22 Thread Patrick Eriksson

Richard, Oliver,

Thanks for your clarifications. My calculations seem to work now.

Bye,

Patrick

On 2021-09-22 09:56, Richard Larsson wrote:
The 2.5 way of absorption lookup table calculations is being 
redesigned. For now you need to manually define the agendas as you do. 
abs_xsec_per_speciesAddLines will be removed, at some point, from the 
xsec code. The reason it's deprecated is that in normal calculations you 
should be putting the line calculations into the propagation matrix 
agenda. Lookup calculations are special here since they just do partial 
calculations.


//Richard


On Wed, Sep 22, 2021, 08:30 Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:


Hi again,

Seems that I have found the reason for some of the failing tests "the
hard way". After spending time on some other failing calculations, I
have figured out that the default in ARTS gives absorption lookup-tables
that miss all lines. For example, TestOdinSMR_1D uses

Copy(abs_xsec_agenda, abs_xsec_agenda__noCIA)

This agenda is defined as

AgendaSet( abs_xsec_agenda__noCIA ){
    abs_xsec_per_speciesInit
    abs_xsec_per_speciesAddConts
}

No inclusion of lines! Another Odin/SMR test uses

AgendaSet( abs_xsec_agenda ) {
    abs_xsec_per_speciesInit
    abs_xsec_per_speciesAddConts
    abs_xsec_per_speciesAddLines
}

and this works. Both tests generate abs tables.

I get a message that abs_xsec_per_speciesAddLines is deprecated. But why
was it removed from the defaults for abs_xsec_agenda before the
alternative is in place?

Anyhow, how shall abs_xsec_agenda be defined to get correct abs tables
in v2.5?

Bye,

Patrick





 Forwarded Message 
Subject: Failing tests
Date: Tue, 21 Sep 2021 17:55:33 +0200
From: Patrick Eriksson <patrick.eriks...@chalmers.se>
To: ARTS Development List <arts_dev.mi@lists.uni-hamburg.de>

Hi all,

I have spent some time trying to figure out how the changes in my
branch could have created some failing tests. But I just ran make
check-all with master and the same tests failed there too, so there
seem to be older issues.

The failed tests are listed below. Are these issues under control? I
noticed that some tests demand an accuracy of 2 nK! The deviation was
2.5 mK, which can be OK. On the other hand, there were also tests
failing with deviations of 30-100 K.

Bye,

Patrick



The following tests FAILED:
   42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
   43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  232 - arts.pyarts.pytest (Failed)






Fwd: Failing tests

2021-09-21 Thread Patrick Eriksson

Hi again,

Seems that I have found the reason for some of the failing tests "the
hard way". After spending time on some other failing calculations, I
have figured out that the default in ARTS gives absorption lookup-tables
that miss all lines. For example, TestOdinSMR_1D uses


Copy(abs_xsec_agenda, abs_xsec_agenda__noCIA)

This agenda is defined as

AgendaSet( abs_xsec_agenda__noCIA ){
  abs_xsec_per_speciesInit
  abs_xsec_per_speciesAddConts
}

No inclusion of lines! Another Odin/SMR test uses

AgendaSet( abs_xsec_agenda ) {
  abs_xsec_per_speciesInit
  abs_xsec_per_speciesAddConts
  abs_xsec_per_speciesAddLines
}

and this works. Both tests generate abs tables.

I get a message that abs_xsec_per_speciesAddLines is deprecated. But why
was it removed from the defaults for abs_xsec_agenda before the
alternative is in place?


Anyhow, how shall abs_xsec_agenda be defined to get correct abs tables 
in v2.5?


Bye,

Patrick





 Forwarded Message 
Subject: Failing tests
Date: Tue, 21 Sep 2021 17:55:33 +0200
From: Patrick Eriksson 
To: ARTS Development List 

Hi all,

I have spent some time trying to figure out how the changes in my
branch could have created some failing tests. But I just ran make
check-all with master and the same tests failed there too, so there
seem to be older issues.


The failed tests are listed below. Are these issues under control? I
noticed that some tests demand an accuracy of 2 nK! The deviation was
2.5 mK, which can be OK. On the other hand, there were also tests
failing with deviations of 30-100 K.


Bye,

Patrick



The following tests FAILED:
   42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
   43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  232 - arts.pyarts.pytest (Failed)






Failing tests

2021-09-21 Thread Patrick Eriksson

Hi all,

I have spent some time trying to figure out how the changes in my
branch could have created some failing tests. But I just ran make
check-all with master and the same tests failed there too, so there
seem to be older issues.


The failed tests are listed below. Are these issues under control? I
noticed that some tests demand an accuracy of 2 nK! The deviation was
2.5 mK, which can be OK. On the other hand, there were also tests
failing with deviations of 30-100 K.


Bye,

Patrick



The following tests FAILED:
   42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
   43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
  156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
  158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
  180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
  184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
  232 - arts.pyarts.pytest (Failed)






Re: ReadHITRAN

2021-09-20 Thread Patrick Eriksson

Stefan,

Yes, that sounds reasonable. I am simply lagging behind the development 
and need to catch up on how we do things. I emailed Richard and Freddy 
on the side about some stuff.


But when you brought it up: is there documentation on the replacement 
mechanism? In the email to R&F I also suggested a README in Artscat, to 
clarify the content of the folder.


Bye,

Patrick



On 2021-09-20 11:10, Stefan Buehler wrote:

Dear Patrick,

I think we should put ARTS’ own line catalog in the center wherever possible 
(which is based on converted current HITRAN). Use it, if you are happy with the 
parameters there. If you want other parameters, and there is a good reason for 
that, consider updating it. We have a mechanism to replace individual 
parameters there (and document those substitutions).

Stefan

On 20 Sep 2021, at 10:46, Patrick Eriksson wrote:


Richard,

Thanks for the additional information. It seems that the take-home message is 
that I should look at other ways to set up the calculations. I just picked up 
an old cfile, used that as a starting point, and did not even consider 
alternatives to ReadHITRAN.

Bye,

Patrick

On 2021-09-20 09:05, Richard Larsson wrote:

Hi Patrick,

We can of course optimize the reading routine but there's no point in doing 
that.  The methods that read external catalogs should only ever be used once 
per update of the external catalog, so it's fine if they are slow but not too 
slow.

New memory is allocated for every absorption line always.  This is because we 
keep line data local, and the model for the line shape and the local quantum 
numbers don't have to be known at compile-time.

Additionally, the line data is pushed into arrays, so they will double in size 
every time you reach the current size.

If we knew the number of lines and broadening species and local quantum 
numbers, then these allocations happen once for the entire band, but we don't 
in ReadHITRAN or any of the external reading routines.  So you will have 
many-many system calls asking for more memory.  This of course also means that 
you are over-allocating memory since that's how Arrays work in ARTS (because 
that's standard C++).  Again, this is also fine since the external catalog when 
read again will allocate only exactly what is required.

With hope,
--Richard

On Mon 20 Sep 2021 at 08:09, Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:

 Richard,

 Thanks for the clarification.

 Is the allocation of more memory done in fixed chunks? Or something
 "smart" in the process? If the former and the chunks are too small,
 then
 maybe I am doing a lot of reallocations. My impression was that memory
 usage increased quite monotonically, not in noticeable steps.

 If the lines have to be sorted into bands, then the complexity of the
 reading will increase in line with what I have noticed. And likely not
 much to do about it.

 Bye,

 Patrick



  > There are two possible slowdowns there could be still. One is
 that you
  > hit some line count where you need to reallocate the array of lines
  > because you have too many. The other is that the search for
 placing the
  > line in the correct band is slow when there are more bands to
 look through.
  >
  > The former would be just pure bad luck, so there's nothing to do
 about it.
  >
  > I would suspect the latter is your problem.  You need to search
 through
  > the existing bands for every new line to find where it belongs. 
Since
  > bands are often clustered closely together in frequency, this
 could slow
  > down the reading as you get more and more bands. A smaller frequency
  > range means fewer bands to look through.
  >
  > //Richard
  >
  > On Sun, Sep 19, 2021, 22:39 Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:
  >
  >     Richard,
  >
  >      > It's expected to take a somewhat arbitrary time.  It reads
 ASCII.
  >
  >     I have tried multiple times and the pattern is not changing.
  >
  >
  >      > The start-up time is going to be large because of having
 to find the
  >      > first frequency, which means you have to parse the text
 nonetheless.
  >
  >     Understood. But that overhead seems to be relatively small.
 In my test,
  >     it seemed to take 4-7 s to reach the first frequency. Anyhow,
 this goes
  >     in the other direction. To minimise the parsing to reach the
 first
  >     frequency, it should be better to read all in one go, and not
 in parts
  >     (which is the case for me).
  >
  >     Bye,
  >
  >     Patrick
  >



Re: ReadHITRAN

2021-09-20 Thread Patrick Eriksson

Richard,

Thanks for the additional information. It seems that the take-home message 
is that I should look at other ways to set up the calculations. I just 
picked up an old cfile, used that as a starting point, and did not even 
consider alternatives to ReadHITRAN.


Bye,

Patrick

On 2021-09-20 09:05, Richard Larsson wrote:

Hi Patrick,

We can of course optimize the reading routine but there's no point in 
doing that.  The methods that read external catalogs should only ever be 
used once per update of the external catalog, so it's fine if they are 
slow but not too slow.


New memory is allocated for every absorption line always.  This is 
because we keep line data local, and the model for the line shape and 
the local quantum numbers don't have to be known at compile-time.


Additionally, the line data is pushed into arrays, so they will double 
in size every time you reach the current size.


If we knew the number of lines and broadening species and local quantum 
numbers, then these allocations happen once for the entire band, but we 
don't in ReadHITRAN or any of the external reading routines.  So you 
will have many-many system calls asking for more memory.  This of course 
also means that you are over-allocating memory since that's how Arrays 
work in ARTS (because that's standard C++).  Again, this is also fine 
since the external catalog when read again will allocate only exactly 
what is required.
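
A toy illustration of this amortised geometric growth (plain Python,
not ARTS code; CPython lists over-allocate by a smaller factor than 2,
but the principle is the same):

import sys

lst, reallocs, last = [], 0, 0
for i in range(1_000_000):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last:    # the underlying buffer was reallocated
        reallocs += 1
        last = size
print(reallocs)         # grows roughly logarithmically with the length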


With hope,
--Richard

On Mon 20 Sep 2021 at 08:09, Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:


Richard,

Thanks for the clarification.

Is the allocation of more memory done in fixed chunks? Or something
"smart" in the process? If the former and the chunks are too small,
then
maybe I am doing a lot of reallocations. My impression was that memory
usage increased quite monotonically, not in noticeable steps.

If the lines have to be sorted into bands, then the complexity of the
reading will increase in line with what I have noticed. And likely not
much to do about it.

Bye,

Patrick



 > There are two possible slowdowns there could be still. One is
that you
 > hit some line count where you need to reallocate the array of lines
 > because you have too many. The other is that the search for
placing the
 > line in the correct band is slow when there are more bands to
look through.
 >
 > The former would be just pure bad luck, so there's nothing to do
about it.
 >
 > I would suspect the latter is your problem.  You need to search
through
 > the existing bands for every new line to find where it belongs. 
Since

 > bands are often clustered closely together in frequency, this
could slow
 > down the reading as you get more and more bands. A smaller frequency
 > range means fewer bands to look through.
 >
 > //Richard
 >
 > On Sun, Sep 19, 2021, 22:39 Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:
 >
 >     Richard,
 >
 >      > It's expected to take a somewhat arbitrary time.  It reads
ASCII.
 >
 >     I have tried multiple times and the pattern is not changing.
 >
 >
 >      > The start-up time is going to be large because of having
to find the
 >      > first frequency, which means you have to parse the text
nonetheless.
 >
 >     Understood. But that overhead seems to be relatively small.
In my test,
 >     it seemed to take 4-7 s to reach the first frequency. Anyhow,
this goes
 >     in the other direction. To minimise the parsing to reach the
first
 >     frequency, it should be better to read all in one go, and not
in parts
 >     (which is the case for me).
 >
 >     Bye,
 >
 >     Patrick
 >



Re: ReadHITRAN

2021-09-19 Thread Patrick Eriksson

Richard,

Thanks for the clarification.

Is the allocation of more memory done in fixed chunks? Or something 
"smart" in the process? If the former and the chunks are too small, then 
maybe I am doing a lot of reallocations. My impression was that memory 
usage increased quite monotonically, not in noticeable steps.


If the lines have to be sorted into bands, then the complexity of the 
reading will increase in line with what I have noticed. And likely not 
much to do about it.


Bye,

Patrick



There are two possible slowdowns there could be still. One is that you 
hit some line count where you need to reallocate the array of lines 
because you have too many. The other is that the search for placing the 
line in the correct band is slow when there are more bands to look through.


The former would be just pure bad luck, so there's nothing to do about it.

I would suspect the latter is your problem.  You need to search through 
the existing bands for every new line to find where it belongs.  Since 
bands are often clustered closely together in frequency, this could slow 
down the reading as you get more and more bands. A smaller frequency 
range means fewer bands to look through.


//Richard

On Sun, Sep 19, 2021, 22:39 Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:


Richard,

 > It's expected to take a somewhat arbitrary time.  It reads ASCII.

I have tried multiple times and the pattern is not changing.


 > The start-up time is going to be large because of having to find the
 > first frequency, which means you have to parse the text nonetheless.

Understood. But that overhead seems to be relatively small. In my test,
it seemed to take 4-7 s to reach the first frequency. Anyhow, this goes
in the other direction. To minimise the parsing to reach the first
frequency, it should be better to read all in one go, and not in parts
(which is the case for me).

Bye,

Patrick



Re: ReadHITRAN

2021-09-19 Thread Patrick Eriksson

Richard,


It's expected to take a somewhat arbitrary time.  It reads ASCII.


I have tried multiple times and the pattern is not changing.


The start-up time is going to be large because of having to find the 
first frequency, which means you have to parse the text nonetheless.


Understood. But that overhead seems to be relatively small. In my test, 
it seemed to take 4-7 s to reach the first frequency. Anyhow, this goes 
in the other direction. To minimise the parsing to reach the first 
frequency, it should be better to read all in one go, and not in parts 
(which is the case for me).


Bye,

Patrick


ReadHITRAN

2021-09-19 Thread Patrick Eriksson

Hi all,

I have noticed that the time used by ReadHITRAN is not linear in the 
width of the frequency range. For example, reading all lines (of five 
main species) between 800 and 840 cm-1 used 90 s, while reading 800-820 
and 820-840 cm-1 separately used 57 s in total.


Is this expected?

(The above uses 1-2% of my RAM).

Bye,

Patrick


Re: VMRs

2021-09-16 Thread Patrick Eriksson

Stefan,


For HSE it is up to the user to apply this "fine tuning" or not. This includes 
adding a call of the HSE method in OEM iterations, to make sure that HSE is 
maintained after an iteration. The VMR rescaling should also be included in the 
iteration agenda, if the retrieval can change H2O close to the ground. That is, 
a VMR rescaling would not be something completely new, as I see it.


It seems to me that this leads into a logical loop: if you retrieve H2O and O3, 
then the retrieved H2O value directly affects the O3 value due to the rescaling. 
As you write, in principle this should even be in the Jacobian, as a 
cross-term. With more water, the lines of all other gases get weaker.

It is true that if there is more of the one there has to be less of the other, 
but argh, this is so ugly.

Perhaps the deeper reason why AER went for the other definition? If VMRs refer 
to the dry pressure, and the dry gases are all either quite constant or very 
rare, then retrievals are more independent.


If we switch to the other definition, then the VMR of e.g. N2 would stay 
the same in a retrieval of H2O. This is why I initially found this 
option nice. But it would not change the physics, and the 
cross-dependences between species would not disappear. You have to 
remember that VMR is a relative measure. To get the absolute amount of 
a species, you still need to calculate the partial pressures. That is, 
you need to "distribute" the total pressure among the gases, and as I 
understand it a general expression for this would be:

p_i = VMR_i * p / VMR_sum

where p_i is the partial pressure of species i, VMR_i its VMR, p the 
pressure, and VMR_sum the sum of all VMRs.


Our present definition is based on VMR_sum=1, while in the alternative 
version it will deviate; with more H2O, VMR_sum will increase, which 
will affect p_i even if VMR_i is unchanged.
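
A small worked example of the expression above (plain Python, numbers
illustrative only):

p = 1013e2                                   # total pressure [Pa]
vmr = {"N2": 0.78, "O2": 0.21, "H2O": 0.03}
vmr_sum = sum(vmr.values())                  # 1.02 instead of 1
p_i = {s: v * p / vmr_sum for s, v in vmr.items()}
# N2: ~77.5 kPa rather than 0.78 * p ~ 79.0 kPa. More H2O lowers the
# partial pressures of the fixed gases even though their VMRs are
# unchanged.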


Or do I miss something?

Bye,

Patrick


Re: VMRs

2021-09-16 Thread Patrick Eriksson

Hi again,

Great that we agree on the problem. OK, let's keep the present 
definition of VMR (that it refers to the sum of all gases, not just 
"constant" ones).


We should then for sure introduce a rescaling method (or maybe several). 
I expressed myself poorly; I rather meant that introducing such a method 
is not a fully complete solution, if we consider the "fine print". What 
I had in mind is the Jacobian: the coupling between variable and 
constant gases should theoretically go into the expressions for the 
Jacobian. But that's just a "smart" comment. I don't say that it should 
be implemented, which would be a pain. Stuart's comment is more 
relevant: this could have consequences for the values given to 
absorption models.


To make the rescaling method easy to apply, I would suggest making one 
specific for Earth, that automatically bases the rescaling on H2O. There 
could be a generic one as well.
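
A sketch of what such a rescaling could look like (plain Python on VMR
profiles; the function name and interface are hypothetical, this is not
an existing ARTS method):

import numpy as np

def rescale_fixed_gases(vmr_fixed, vmr_h2o):
    """Scale "fixed" gas VMR profiles so that all VMRs sum to 1.

    vmr_fixed: dict of name -> 1-D VMR profile of the fixed gases.
    vmr_h2o:   1-D VMR profile of the variable gas (H2O on Earth).
    """
    scale = (1.0 - vmr_h2o) / sum(vmr_fixed.values())
    return {name: vmr * scale for name, vmr in vmr_fixed.items()}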


Yes, this puts some weight on the user. Hydrostatic equilibrium (HSE) is 
a similar case. Input profiles do not always fulfil HSE (this is the 
case for Fascod, if it is not a matter of geopotential vs geometric 
altitudes?). For HSE it is up to the user to apply this "fine tuning" or 
not. This includes adding a call of the HSE method in OEM iterations, to 
make sure that HSE is maintained after an iteration. The VMR rescaling 
should also be included in the iteration agenda, if the retrieval can 
change H2O close to the ground. That is, a VMR rescaling would not be 
something completely new, as I see it.


Bye,

Patrick


On 2021-09-16 15:01, Stefan Buehler wrote:

Hej,


With our present definition of VMRs, we agree that having 78% N2, 21% O2 and 
e.g. 3% H2O is unphysical? That with a lot of H2O (or any other non-fixed gas) 
the standard values of the fixed gases should be scaled downwards, in the 
example above with 0.97. Do you agree?


Yes, I agree.


It seems a bit weird to me to use this definition at the (low) level of the 
absorption routines. Perhaps one solution would be to have an option for this 
behaviour when ingesting concentration profile data? Perhaps by passing in a 
list of species that should be considered as not adding to the denominator for 
the VMR definition.


If we agree on the above, then this is the simplest (but not most theoretically 
correct) solution.


Why not correct?

/Stefan



Re: VMRs

2021-09-16 Thread Patrick Eriksson

Hej,

No time for writing a lot. Right now I just want to make a basic check 
of our understanding.

With our present definition of VMRs, we agree that having 78% N2, 21% 
O2 and e.g. 3% H2O is unphysical? That with a lot of H2O (or any other 
non-fixed gas) the standard values of the fixed gases should be scaled 
downwards, in the example above with 0.97. Do you agree?



It seems a bit weird to me to use this definition at the (low) level of the 
absorption routines. Perhaps one solution would be to have an option for this 
behaviour when ingesting concentration profile data? Perhaps by passing in a 
list of species that should be considered as not adding to the denominator for 
the VMR definition.


If we agree on the above, then this is the simplest (but not most 
theoretically correct) solution.


Bye,

Patrick









Note that for once the special thing about water here is not the fact that it’s 
condensible, I think, but just that there is so much of it, and at the same 
time very variable. Other gas species also have very variable concentrations, 
but it doesn’t matter for the total pressure.

All the best,

Stefan

On 15 Sep 2021, at 20:19, Patrick Eriksson wrote:


Stefan,

I had not considered this definition of VMR either. But would it not make sense 
to follow it? Then a statement that the atmosphere contains 20.95% oxygen makes 
more sense. You yourself pointed out that it would make sense to scale N2 and O2 
at low, humid altitudes, where the amount of water can be several %. In code 
preparing data for ARTS I normally do this adjustment. Should be more correct!?

A problem is to define what the wet species is when we go to other planets. Or 
maybe there are even planets with several wet species?

That is, I would be in favour of defining VMR with respect to dry air, if we can 
find a manner to handle other planets.

Bye,

Patrick



On 2021-09-15 18:27, Stefan Buehler wrote:

Dear all,

Eli Mlawer brought up an interesting point in some other context:


we recently had an LBLRTM user get confused on our vmr, which is amount_of_gas / 
amount_of_dry_air. They weren’t sure that dry air was the denominator instead 
of total air. I’m too lazy to look at the link above that @Robert Pincus 
provided, but I hope it has dry air in the denominator. So much easier to 
simply specify evenly mixed gases, such as 400 ppm CO2 (and, 20 years from now, 
500 ppm CO2).


I’ve never considered that one could define it this way. Perhaps this 
convention explains why VMRs in climatologies like FASCOD add up so poorly to 
1.

I’m not suggesting that we change our behaviour, but want to make you aware 
that this convention is in use. (Or perhaps you were already aware, and I just 
missed it.)

All the best,

Stefan



Re: VMRs

2021-09-15 Thread Patrick Eriksson

Stefan,

I had not considered this definition of VMR either. But would it not make 
sense to follow it? Then a statement that the atmosphere contains 20.95% 
oxygen makes more sense. You yourself pointed out that it would make 
sense to scale N2 and O2 at low, humid altitudes, where the amount of 
water can be several %. In code preparing data for ARTS I normally do 
this adjustment. Should be more correct!?

A problem is to define what the wet species is when we go to other 
planets. Or maybe there are even planets with several wet species?

That is, I would be in favour of defining VMR with respect to dry air, if 
we can find a manner to handle other planets.


Bye,

Patrick



On 2021-09-15 18:27, Stefan Buehler wrote:

Dear all,

Eli Mlawer brought up an interesting point in some other context:


we recently had an LBLRTM user get confused on our vmr, which is amount_of_gas / 
amount_of_dry_air. They weren’t sure that dry air was the denominator instead 
of total air. I’m too lazy to look at the link above that @Robert Pincus 
provided, but I hope it has dry air in the denominator. So much easier to 
simply specify evenly mixed gases, such as 400 ppm CO2 (and, 20 years from now, 
500 ppm CO2).


I’ve never considered that one could define it this way. Perhaps this 
convention explains why VMRs in climatologies like FASCOD add up so poorly to 
1.

I’m not suggesting that we change our behaviour, but want to make you aware 
that this convention is in use. (Or perhaps you were already aware, and I just 
missed it.)

All the best,

Stefan



Re: [arts-dev] 20 Years of ARTS Development

2020-03-11 Thread Patrick Eriksson

Hi all,

And I take the opportunity to thank all who have contributed to ARTS during 
these first 20 years! This with a special thanks to Oliver, who has kept a 
watchful eye on ARTS from day one.


Cheers,

Patrick



On 2020-03-11 14:59, Oliver Lemke wrote:

Hi all,

20 years ago, on March 11 in 2000, ARTS was born and development started. As a little 
celebration, I put together an animation to compress these 20 years into a 3 minute video:


https://youtu.be/rGQDuLs2-5c

Looking forward to the next 20 years. :-)

Have fun,
Oliver




Re: [arts-dev] Fwd: Clouds in ARTS

2019-11-05 Thread Patrick Eriksson

Dear Frank Werner,

It makes me happy to hear that you are integrating ARTS into your code 
base. When we started ARTS, limb sounding was one of the main 
applications, so it is very nice if ARTS gets used on limb sounders 
beside Odin/SMR.


Let me start by asking if you are using v2.2 or a relatively recent v2.3?

If v2.2: Then you have to create the "pnd_field" yourself and import 
data with e.g. ParticleTypeAdd.


If v2.3: In this version you can work with particle size distributions 
(PSDs). Be aware that there was a first system, which is now replaced. 
The later version operates with particle_bulkprop_field. With this 
system you can give ARTS IWC values and select some PSDs, such as the 
MH97 one that both Dong Wu and I have used for limb retrievals.

In both cases, you either generate the scattering data inside ARTS with 
T-matrix or take it from our "scattering database".


Some brief comments. If you tell me what version you actually are using, 
I can provide more detailed help.


Bye,

Patrick


On 2019-11-04 22:07, Claudia Emde wrote:

Dear Arts-Developers,

here is a question about how to include clouds in ARTS. Since I am not 
up to date, I am forwarding this message to you.


Best regards,
Claudia


 Forwarded Message 
Subject: Clouds in ARTS
Date:   Mon, 4 Nov 2019 17:40:47 +
From:   Werner, Frank (329D) 
To: claudia.e...@lmu.de 



Hi Claudia,

The MLS satellite team here at JPL has recently started using ARTS, in 
addition to the in house radiative transfer algorithms. Michael Schwartz 
and I have been the two people playing around with ARTS, trying to 
incorporate it as another RT option in our code base. We are almost at 
the point where we have ARTS as another plug-and-play option for our 
retrievals.


One of the last remaining issues is handling of clouds. As far as I can 
tell, all I have to do is turn the ‘cloudbox’ on and add hydrometeors 
via ‘ParticleTypeAdd’. Is there a simple example for some cloud 
absorption you can send me? It doesn’t need to be super realistic or 
anything. As far as I can tell, the workspace method needs scattering 
properties and number densities. All I could find in the standard ARTS 
data sets is the Chevallier_91L stuff in 
‘/controlfiles/planets/Earth/Chevallier_91L/’.


Again, a simple example of some cloud absorption would be appreciated. 
Thanks for your help!


Best wishes,

Frank

--

Frank Werner
Mail Stop 183-701, Jet Propulsion Laboratory
4800 Oak Grove Drive, Pasadena, California 91109, United States
Phone: +1 818 354-1918

Fax: +1 818 393 5065


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi




[arts-dev] Save the date: ARTS workshop June 2020

2019-10-03 Thread Patrick Eriksson

Dear ARTS friends,

It's time for a new ARTS workshop. The workshop will be similar to the 
old ones, but this time we also have something to celebrate. The ARTS 
project is approaching an age of 20 years! And if all goes well, we will 
announce ARTS-3 some time before the workshop.


The workshop will be held June 8-11, 2020. The venue will again be 
Kristineberg Marine Research Station, on the west coast of Sweden. You 
will need to be in Gothenburg around 14.00 June 8, and be back in 
Gothenburg around 15.00 June 11.


Mark this time period in your calendar. The invitation will be sent out 
in January.


If you are not familiar with these workshops, see:
http://www.radiativetransfer.org/events

(We are aware that the IPWG and IWSSM workshops were just announced 
for June 1-5. This is unlucky, but we cannot move the ARTS workshop as 
Kristineberg is fully booked.)


Kind regards,

Stefan and Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Adding new PSDs to ARTS

2019-09-27 Thread Patrick Eriksson

Stuart,

Good that you emailed. There is an old and a new system for PSDs. 
pnd_fieldCalcFromscat_speciesFields uses the old system. We have now 
decided that the old system will be removed. This has not yet happened due to 
lack of time. The new system supports retrievals, in contrast to the old 
one.


There could be a lack of a demo case for the new system. 
The new PSDs should all be found in m_psd.cc. The new system uses 
pnd_fieldCalcFromParticleBulkProps.


This is all I have time to write now. I can try to explain more 
carefully on Monday. I can also send you example cfiles.


Bye,

Patrick


On 2019-09-27 17:25, Fox, Stuart wrote:

Hi all,

I would like to add some further PSD options to ARTS (specifically ones 
for rain and graupel that are consistent with the single-moment schemes 
used in the Met Office NWP model). I will be making use of them by 
defining hydrometeor mass density fields and then using 
pnd_fieldCalcFromscat_speciesFields.


However, I’m a little bit confused as to the correct way to implement 
these (and there appear to be incomplete implementations of some of the 
existing options, e.g. Abel & Boutle 2012 for rain, which happens to be 
one of the ones I’d like to use). The guidance in the ARTS developer 
guide also doesn’t seem to follow what’s actually in the code for some 
cases.


So far I have:

-added logic to pnd_fieldCalcFromscat_speciesFields to call pnd_fieldXXX 
for each of the new parametrizations


-updated the documentation for pnd_fieldCalcFromscat_speciesFields in 
methods.cc to include the new parametrizations


-added new pnd_fieldXXX function to microphysics.cc (not cloudbox.cc as 
suggested by the developer guide) to calculate the pnd field according 
to the raw PSD function psd_XXX and the scattering meta-data (and added 
this to microphysics.h as well)


-added new “raw” psd calculation psd_XXX to psd.cc (see the sketch just below 
for the kind of function I mean)
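
As an illustration of that last step, a minimal sketch of such a "raw" PSD 
(illustrative C++ of my own, not the actual ARTS implementation; the 
Abel & Boutle 2012 coefficients are quoted from memory and should be checked 
against the paper):

  #include <cmath>  // std::exp, std::pow, std::tgamma

  // n(D) = N0 * exp(-lambda*D) with N0 = x1 * lambda^x2.  For spheres,
  // mass(D) = a*D^b with a = pi/6 * rho_water and b = 3, so the mass content
  // L = a * N0 * tgamma(b+1) / lambda^(b+1) can be solved for
  // lambda = (L / (a * x1 * tgamma(b+1)))^(1/(x2-b-1)).
  double psd_rain_ab12(double D,  // particle diameter [m]
                       double L,  // rain mass content [kg/m3]
                       double x1 = 0.22, double x2 = 2.2) {
    const double a = 3.14159265358979 / 6.0 * 1000.0;  // pi/6 * rho_water
    const double b = 3.0;
    const double lambda = std::pow(L / (a * x1 * std::tgamma(b + 1.0)),
                                   1.0 / (x2 - b - 1.0));
    const double n0 = x1 * std::pow(lambda, x2);
    return n0 * std::exp(-lambda * D);  // [m^-4]
  }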

These steps seem to be all that is required to make my use case work, but I 
can see that they are not quite complete. In particular, I believe that I 
should add a new workspace method dNdD_XXX to allow a direct calculation 
of the raw PSD. Should this go in m_microphysics.cc (and be added to 
methods.cc)? This seems to be where other ones are, but again the 
developer guide suggests it should be in m_cloudbox.cc.


What is the purpose of the psd_XXX functions in m_psd.cc? Are these also 
required?


Thanks for your help,

Stuart

Dr Stuart Fox  Radiation Research Manager

*Met Office*FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
Tel: +44 (0)330 135 2480  Fax: +44 (0)1392 885681
Email: stuart@metoffice.gov.uk  Website: www.metoffice.gov.uk
See our guide to climate change at 
http://www.metoffice.gov.uk/climate-change/guide/



___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi




Re: [arts-dev] About error checking

2019-03-26 Thread Patrick Eriksson

Richard,

Now you are a bit all over the place. Yes of course, things can be 
handled in a nicer way if we introduce new features.


ARTS is almost 20 years old! When we started ARTS the aim was in fact to use 
as few "groups" as possible. And I would say that we kept that rule for a 
long time, but lately things have changed. You have added some new 
groups during the last years and OEM resulted in some other new ones. 
Earlier we were picky that each group could be imported and 
exported to files, and could be printed to screen. I don't know if this 
is true for all newer groups.


I don't say that the new groups are bad. For sure we needed a special 
group for covariance matrices, as an example. But as usual I would prefer 
that we have a clear strategy for the future. And there should be 
documentation.


I am prepared to discuss this, but not by email. It just takes too much 
time, and these email discussions tend to become a moving target. But I 
could try to find a time for a video/tele conf if there is a general 
feeling that we should add more groups now or in the near future.


Bye,

Patrick






On 2019-03-26 11:51, Richard Larsson wrote:

Hi Patrick,



On Mon, 25 Mar 2019 at 19:47, Patrick Eriksson wrote:


Hi Richard,

I can agree that this is not always critical for efficiency as long
as the check is a simple comparison. But some checks are much more
demanding. For example, the altitudes in z_field should be strictly
increasing. If you have a large 3D atmosphere, it will be very
costly to
repeat this check for every single ppath calculation. And should
this be
checked also in other places where z_field is used? For example, if you
use iyIndependentBeamApproximation you will repeat the check as also
the
DISORT and RT4 methods should check this, as they can be called without
providing a ppath.


If a bad z_field can cause an assert today, then it has to be checked 
every time it is accessed.


This problem seems simply to be a quick and somewhat bad original 
design (hindsight is 20/20, and all that).  To start with, if it has to 
be structured, then z_field is not a field.  It is as much a grid as 
pressure, so the name needs to change.


And since we have so many grids that demand a certain structure, i.e., 
increasing or decreasing values along some axis but perhaps not all, 
then why are these Tensors and Vectors that are inherently 
unstructured?  They could be classes of some Grid or StructuredGrid 
types.  You can easily design a test in such a class that makes sure the 
structure is good after every access that can change a value.  Some 
special access functions, like logspace and linspace, and 
HSE-regridding, might have to be added to not trigger the check at a bad 
time, but not many.
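
A minimal sketch of the idea (my own illustration, not proposed ARTS code; 
needs <stdexcept>): a thin wrapper that re-validates whenever a value can 
change:

  class IncreasingGrid {
    Vector mdata;  // the underlying, inherently unstructured storage
  public:
    explicit IncreasingGrid(const Vector& v) : mdata(v) { check(); }
    Numeric operator[](Index i) const { return mdata[i]; }
    void set(Index i, Numeric x) { mdata[i] = x; check(); }  // validate on write
  private:
    void check() const {
      for (Index i = 1; i < mdata.nelem(); ++i)
        if (mdata[i] <= mdata[i-1])
          throw std::runtime_error("Grid is not strictly increasing");
    }
  };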


Since, I presume, iyIndependentBeamApproximation only takes "const 
Tensor3& z_field" at this point, the current z_field cannot change its 
values inside the function.  However, since it is possible that the 
z_field in iyIndependentBeamApproximation is not the same as the z_field 
when ppath was generated, the size of z_field and ppath both have to be 
checked in iyIndependentBeamApproximation and other iy-functions.


However, to repeat: If a bad z_field can cause an assert today, then it 
has to be checked every time it is accessed.



Further, I don't follow what strategy you propose. The discussion
around
planck indicated that you wanted the checks as far down as possible.
But
the last email seems to indicate that you also want checks higher up,
e.g. before entering interpolation. I assume we don't want checks on
every level. So we need to be clear about at what level the checks
shall
be placed. If not, everybody will be lazy and hope that a check
somewhere else catches the problem.


There were asserts in the physics_funcs.cc functions.  Asserts that were 
triggered.  So I changed them to throw-catch.


I am simply saying that every function needs to be sure it cannot 
trigger any asserts.  Using some global magical Index is not enough to 
ensure that.


A Numeric that is not allowed to be outside a certain domain is a 
runtime or domain error and not an assert.  You either throw such errors 
in physics_funcs.cc, you make every function that takes t_field and 
rtp_temperature check that they are correct, or you create a special 
class just for temperature that enforces a positive value.  The first is 
easier.



In any case, it should be easier to provide informative error messages
if problems are identified early on. That is, easier to pinpoint the
reason to the problem.


I agree, but not by the magic that is *_checkedCalc, since it does not 
guarantee a single thing once in another function.

With hope,
//Richard

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi

Re: [arts-dev] About error checking

2019-03-26 Thread Patrick Eriksson

Hi Oliver,

Here I will only comment on point 1.

The aspect of running ARTS interactively was not pointed out before and 
thus not considered in my answers. With that in mind it seems reasonable 
to replace most asserts with errors. (Most, as I agree that the index 
checking in matpack should be left as is.)


So I am in favor of your suggestion (without having time to consider the 
details right now). I have just one wish. Write down some instructions 
on how the new macros shall be used, and when assert possibly should be 
applied. We should document these things better in the future.


I suggest to add the instructions to the ARTS developer guide. There is 
already a small section on use of asserts (sec 1.6).


(To switch to the new macros can be a task for the "ARTS cleaning week" 
in September!)


Bye,

Patrick

On 2019-03-26 11:46, Oliver Lemke wrote:

Hi Patrick and Richard,

I think we're mixing up two issues with error handling in ARTS which should be 
discussed separately:

1. There are instances where ARTS currently exits non-gracefully which is 
painful esp. for python API users.

2. How to better organize where errors are checked and which checks can be 
skipped in release mode.

I will concentrate on point 1 in this email because that's what I think 
triggered the whole discussion. For users who mainly use the ARTS API, asserts 
and uncaught exceptions are painful. Since their occurrence leads to process 
termination, this means in case of the Python API, it will kill the whole 
Python process. This is very annoying if you're working in an interactive shell 
such as ipython, because your whole session will die if this occurs. Therefore, 
our main focus should first of all be on fixing these issues.

Currently, when we catch exceptions at higher levels such as main agenda 
execution, ybatch loop or the ARTS API interface, we only catch exceptions of 
type std::runtime_error. If an unforeseen std::exception from any library 
function is thrown, it'll lead to program termination. This issue is rather 
easy to solve by explicitly catching std::exception in addition to 
std::runtime_error in the appropriate places and handle them just as 
gracefully. I will take care of adding the code where necessary.
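
In code, the fix amounts to something like the following sketch 
(handle_gracefully is a hypothetical stand-in for whatever the respective 
caller does with the message):

  try {
    // execute main agenda / ybatch loop / ARTS API call ...
  } catch (const std::runtime_error& e) {
    handle_gracefully(e.what());  // what we already catch today
  } catch (const std::exception& e) {
    handle_gracefully(e.what());  // new: unforeseen library exceptions
  }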

For assertions, the fix is a bit more involved. Since they can't be caught, we 
would need to replace them with a new mechanism. As the benchmarks have shown, 
we won't lose (much) performance if we use try/catch blocks instead. There are 
very few exceptions, such as the index operators in matpack, which are better 
left alone.
After discussion yesterday with Stefan, we came up with the following proposal: 
Introduce a couple of convenience methods (macros) that can be used similar to 
how the standard assert works now:

ARTS_ASSERT(condition, errmsg)

This would work the same as 'assert' except that it throws a 
runtime_error if the condition is not fulfilled. Also, this statement will 
be skipped if ARTS is compiled in release mode. They could also be turned off 
in any configuration by the already existing cmake flag '-DNO_ASSERT=1'. If 
anyone feels that the current name of this option doesn't properly reflect its 
purpose, it can be renamed of course.

ARTS_THROW(condition, errmsg)

This will be the same as ARTS_ASSERT except that it will always be active.

Both macros will take care of adding the function name, filename and linenumber 
to the error message.
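
A minimal sketch of the two macros (my guess at an implementation, not 
necessarily what will end up in ARTS; needs <sstream> and <stdexcept>):

  #define ARTS_THROW(condition, errmsg)                                 \
    do {                                                                \
      if (!(condition)) {                                               \
        std::ostringstream os;                                          \
        os << __func__ << " (" << __FILE__ << ":" << __LINE__ << "): "  \
           << errmsg;                                                   \
        throw std::runtime_error(os.str());                             \
      }                                                                 \
    } while (0)

  #ifdef NO_ASSERT
  #define ARTS_ASSERT(condition, errmsg) do {} while (0)
  #else
  #define ARTS_ASSERT(condition, errmsg) ARTS_THROW(condition, errmsg)
  #endif

A call would then look like, e.g., ARTS_ASSERT(t > 0, "Non-positive temperature").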

More complex checks have to be implemented in a custom try/catch block of 
course. And blocks that should be possible to deactivate should be placed 
inside a preprocessor block such as DEBUG_ONLY. Again here, if the name seems 
inappropriate, another macro that does the same thing with a better fitting 
name could be introduced. So we could have IGNORABLE as an alias to DEBUG_ONLY 
and -DIGNORE_ERRORS=1 as an alias for -DNO_ASSERT=1. I don't see why we need 
both because in Release mode we would want to activate both options anyway, 
right?

With respect to point 2., I don't yet see a clear strategy or way to give a 
clear definition of which errors should be ignorable and which not. Since the 
past has clearly proven that it is already difficult to decide when to use an 
assertion and when a runtime_error, I don't fancy the idea of introducing a 
third class of errors that's defined as 'Can be turned off if the user knows 
what he's doing'. Correct me if I'm wrong Richard, but that's basically what 
you want to achieve with the proposed 'IGNORABLE' flag?

Cheers,
Oliver



On 25 Mar 2019, at 19:47, Patrick Eriksson  wrote:

Hi Richard,

I can agree that this is not always critical for efficiency as long as the 
check is a simple comparison. But some checks are much more demanding. For 
example, the altitudes in z_field should be strictly increasing. If you have a 
large 3D atmosphere, it will be very costly to repeat this check for every 
single ppath calculation.

Re: [arts-dev] About error checking

2019-03-25 Thread Patrick Eriksson

Hi Richard,

I can agree that this is not always critical for efficiency as long 
as the check is a simple comparison. But some checks are much more 
demanding. For example, the altitudes in z_field should be strictly 
increasing. If you have a large 3D atmosphere, it will be very costly to 
repeat this check for every single ppath calculation. And should this be 
checked also in other places where z_field is used? For example, if you 
use iyIndependentBeamApproximation you will repeat the check as also the 
DISORT and RT4 methods should check this, as they can be called without 
providing a ppath.


Further, I don't follow what strategy you propose. The discussion around 
planck indicated that you wanted the checks as far down as possible. But 
the last email seems to indicate that you also want checks higher up, 
e.g. before entering interpolation. I assume we don't want checks on 
every level. So we need to be clear about at what level the checks shall 
be placed. If not, everybody will be lazy and hope that a check 
somewhere else catches the problem.


In any case, it should be easier to provide informative error messages 
if problems are identified early on. That is, easier to pinpoint the 
reason to the problem.


Bye,

Patrick



On 2019-03-25 12:24, Richard Larsson wrote:

Hi Patrick,

Just some quick points.

On Sun, 24 Mar 2019 at 10:29, Patrick Eriksson wrote:


Hi Richard,

A great initiative. How errors are thrown can for sure be improved. We 
are both lacking such checks (still too many cases where an assert shows 
up instead of a proper error message), and the errors are probably 
implemented inconsistently.

When it comes to use try/catch, I leave the discussion to others.


But I must bring up another aspect here, on what level to apply asserts
and errors. My view is that we have decided that such basic
functions as
planck should only contain asserts. For efficiency reasons.


Two things.

First, Oliver tested the speeds here.  The results are for random functions in 
physics_funcs.cc:


number_density (100 million calls, averaged over 5 runs):

with assert:    0.484s
with try/catch: 0.502s, 3.8% slower than assert
no checks:      0.437s, 9.8% faster than assert

dinvplanckdI (20 million calls, averaged over 5 runs):

with assert:    0.576s
with try/catch: 0.563s, 2.3% faster than assert
no checks:      0.561s, 2.7% faster than assert

but with no notable differences.  (We are not spending any of our time 
in these functions really, so +-10% is nothing.)  One thing that is nicer 
about asserts is that they are completely gone when NDEBUG is set.  We 
might therefore want to wrap the deeper function-calls in something that 
removes these errors from the compiler's view.  We have the 
DEBUG_ONLY-environments for that, but a negative temperature is not a 
debug-thing.  I suggested to Oliver that we introduce a flag that allows us 
to remove some or all parts of the error-checking code at the 
behest of the user.  I do not know what to name said flag so the code is 
readable.  "IGNORABLE()" in ARTS and "-DIGNORE_ERRORS=1" in cmake to set 
the flag that everything in the previous parenthesis is not passed to 
the compiler?  This could be used to generate your 'faster' code but 
errors would just be completely ignored; of course, users would have to 
be warned that any OS error or memory error could still follow...


The second point I have is that I really do not see the point of the 
asserts at all.  Had they allowed the compiler to make guesses, that 
would be somewhat nice.  But in practice, they just barely indicate what 
the issues are by comparing some numbers or indices before terminating a 
program.  They don't offer any solutions, and they should really never 
ever occur.  I would simply ban them from use in ARTS, switch to throws, 
and allow the user to tell the compiler to allow building a properly 
non-debug-able version of ARTS where all errors are ignored as above.



For a pure forward model run, a negative frequency or temperature would
come from f_grid and t_field, respectively. We decided to introduce
special check methods, such as atmfields_checkedCalc, to e.g. catch
negative temperatures in input.


I think it would be better if we simply removed the *_checkedCalc 
functions entirely (as a demand for executing code; they are still good 
for sanity checkups).  I think they mess up the logic of many things.  
Agendas that work use these outputs when they don't need them, and the 
methods have to manually check the input anyways because you cannot 
allow segfaults.  It is not the agendas that need these checks.  It is 
the methods calling these agendas.  And they only need the checks for 
ensuring they have understood what they want to do.  And even if the 
checked value is positive when you reach a function, you cannot say in 
that met

Re: [arts-dev] About error checking

2019-03-24 Thread Patrick Eriksson

Hi Richard,

A great initiative. How errors are thrown can for sure be improved. We 
are both lacking such checks (still too many cases where an assert shows 
up instead of a proper error message), and the errors are probably 
implemented inconsistently.


When it comes to use try/catch, I leave the discussion to others.


But I must bring up another aspect here, on what level to apply asserts 
and errors. My view is that we have decided that such basic functions as 
planck should only contain asserts. For efficiency reasons.


For a pure forward model run, a negative frequency or temperature would 
come from f_grid and t_field, respectively. We decided to introduce 
special check methods, such as atmfields_checkedCalc, to e.g. catch 
negative temperatures in input.


When doing OEM, negative temperatures can pop up after each iteration 
and this should be checked. But not by planck, this should happen on a 
higher level.


A simple solution here is to include a call of atmfields_checkedCalc 
etc. in inversion_iterate_agenda. The drawback is that some data will be 
checked over and over again despite not being changed.


So it could be discussed if checks should be added to the OEM part. That 
is, data changed in an iteration should be checked for unphysical values.



That is, I think there are more things to discuss than you bring up in 
your email. So don't start anything big before we have reached a common 
view here.


Bye,

Patrick


On 2019-03-22 16:34, Richard Larsson wrote:

Hi all,

I have kept running into problems with errors in ARTS produced by bad 
input for OEM.  Asserts, and not exceptions, terminate the program in 
several cases.


I just made a small update to turn several errors affecting the Zeeman code, 
which before could yield assert-errors, into try-catch blocks that throw 
runtime_error().  This means I can catch the errors properly in a Python 
try-except block.  The speed of the execution of the central parts of 
the code is unaffected in tests.  I need input from the ARTS developers 
on whether the way I did this is stylistically acceptable or not.


When updating these error handlers, I decided to use function-try-blocks 
instead of in-lined try-blocks.  I shared some code with Oliver, because 
of the errors above, and he suggested against using function-try-blocks, 
in favor of the traditional system of keeping all the error handling 
inside the main block.  However, he later in the conversation also 
agreed with me that it makes it much easier to pass errors upwards in 
ARTS from the lower functions if we use function-try-blocks, since all 
the function calls of a function are then automatically inside a 
try-catch block.  So we decided to run the stylistic question by everyone.


Please give me a comment on whether this is OK stylistically or not in ARTS. 
I find the function-try-block cleaner since all the error-printing code 
is kept away, but if others disagree it just complicates matters.


The easiest demonstration of this change is in the updated 
src/physics_funcs.cc file.  Please have a 
look at the two "planck()"-functions.  Both versions only throw (const 
char * e) errors themselves and turn them into std::runtime_error 
before re-throwing.  However, this means that the VectorView version of 
the function can see an error that is (const std::exception& e) because 
the catch-block of the Numeric planck() function turns it into one.  And 
since all errors in ARTS have to be runtime-errors for the user, it can 
also know that any upwards forwarding will deal with runtime-errors.


With hope,
//Richard

The src/physics_funcs.cc planck() error handling:

If the planck() Vector-function is sent a negative temperature, the 
error it produces will look as such:

Errors in calls by *planck* internal function:
Errors raised by *planck* internal function:
     Non-positive temperature

If the planck() Vector function is passed a frequency vector as [-1, -0.5, 
0, 0.5, 1], the error it produces will look as such:

Errors in calls by *planck* internal function:
Errors raised by *planck* internal function:
     Error: Non-positive frequency
     You have 3 frequency grid points that reports a non-positive frequency!

Ps.  To not have to search.

Function-try-block form:  void fun() try {} catch(...) {}

Inline form: void fun() {try{} catch(...) {}}

Same length of code.  Function-try-blocks do not have the variables 
before the throw-statement available for output; they have to be thrown 
to be seen.  However, you can perform most if not all computations you 
wish inside the catch-block.  Like the error counter I made for f-grid 
in the *planck* function's catch above.
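
For reference, a compact sketch of the function-try-block variant of the 
Numeric planck() described above (simplified by me, includes omitted; constant 
names as used in ARTS, error texts abbreviated):

  Numeric planck(const Numeric& f, const Numeric& t) try {
    if (t <= 0) throw "Non-positive temperature";
    if (f <= 0) throw "Non-positive frequency";
    return 2 * PLANCK_CONST * f * f * f /
           (SPEED_OF_LIGHT * SPEED_OF_LIGHT *
            (std::exp(PLANCK_CONST * f / (BOLTZMAN_CONST * t)) - 1));
  } catch (const char* e) {
    std::ostringstream os;
    os << "Errors raised by *planck* internal function:\n    " << e;
    throw std::runtime_error(os.str());
  }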



___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi



[arts-dev] Non-fractional grid positions

2019-03-15 Thread Patrick Eriksson

Hi Stefan, Oliver and other interested in this long email,

I just made a commit where I try to fix things to make retrievals using 
grids having length 1 possible. This is very handy in some cases, but 
causes problems with the interpolation inside ARTS. For the moment I 
have solved this by special functions, and this generated quite a bit of 
code.


But it could in fact be solved in a general manner. And then also fix 
other things.


For running OEM the issue of concern appears when mapping the x vector 
back to ARTS' variables. If we use 3D and a length-1 pressure retrieval 
grid as an example, then we end up with a situation where we want to 
interpolate a Tensor3 having size X(1,nlat,nlon).

X can be a scaling factor for H2O as indicated in my ChangeLog message

In the calculation of the Jacobian, I use the grid_pos system to map 
data from lines-of-sight to the retrieval grids, and I once made some 
function that gives a correct grid position for length-1 retrieval 
grids. That is, the grid position is set to

0 0 1
It works in fact to calculate interpolation weights for this case.

On the other hand, you cannot apply the interp functions, as these 
functions always include idx+1, even if fd[0]=0. That is, there is a 
blind summation for the values inside the grid range, even if some 
values will get weight 0. My new functions could be removed if the 
interp functions noticed when it is unnecessary to involve idx+1.


This would also solve:

Now it is not possible to set the grid position to the end grid point. 
That is,


n-1 0 1

as idx+1 is then an invalid index. This generated extra code in the 
ppath part. For basically all observations there is a ppath point 
exactly at the uppermost pressure level.


Quite a lot of our interpolations could be avoided. As retrieval grids 
normally are sub-sets of the forward model grids (and this is the recommended 
choice for numerical accuracy), there is a lot of interpolation that in 
fact is not interpolation (i.e. points of the new and old grid are the 
same). And is this not also the case for interpolation of absorption 
lookup tables? The normal case should be that the pressure grid of the 
lookup table equals p_grid. As we here apply polynomial interpolation, a 
lot of calculations could be saved by identifying when fd[0]=0. (Or is 
there special code for abs_lookup?)



The question is how to add this without making the general interpolation 
slower. I assume that a boolean in the grid_pos structure could help, 
that is true if fd[0] can be treated as exactly zero. If we call this 
new field nonfrac, the green 2D interpolation could look like this:

  tia = 0;
  for ( Index r=0; r < (tr.nonfrac ? 1 : 2); ++r )
    for ( Index c=0; c < (tc.nonfrac ? 1 : 2); ++c )
      tia += a(tr.idx+r, tc.idx+c) * itw.get(r*2+c);

(cf. interpolation.cc line 2580. Note that I had to add some temporary 
asserts to make sure that my special code is OK.)
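
For clarity, the grid position structure with the proposed field might look 
like this (a sketch; today's GridPos holds only the index and the fractional 
distances):

  struct GridPos {
    Index   idx;      // lower grid index of the position
    Numeric fd[2];    // fractional distances, fd[0] + fd[1] == 1
    bool    nonfrac;  // new: true if fd[0] can be treated as exactly 0
  };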


This has the drawback that r*2+c is more costly than ++iti (as in the 
present solution), but we could still gain on average. Another option 
would be that a similar thing is done when calculating the interpolation 
weights, i.e. the number of weights is smaller when nonfrac is true. The 
weights should be ordered in such a way that ++iti will work. And we 
should then really gain on average!?



And now I realize that this bool would have saved me a lot of headache in 
the ppath_step part. There are a lot of checks if fd[0] is effectively 
zero (which it is for the end points of each step). Introducing the flag 
would allow me to clean this up.



What do you say? Comments? Or an even better solution? If you don't 
follow, ask or let's discuss next Friday.


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Documentation request: jacobianAdd* and retrievalAdd* functions

2019-01-21 Thread Patrick Eriksson

Hi Richard,

It's nice that you are trying to use these methods. As far as I know, 
you are the first to use the methods without Qpack.


I will have a look at the jacobianAdd methods and try to explain more 
clearly how many x-elements are generated by each method.


It seems reasonable that you should get information on the involved sizes if 
Jacobian and Sx parts do not match. I assume Simon will look at it (but 
he is taking a course this week, working on an ESA study, ..., so it can 
take some time).


Bye,

Patrick



On 2019-01-21 15:07, Richard Larsson wrote:

Hi,

(This is mainly a question to Simon and Patrick but the dev-list exists 
so I am using it.)


I have been trying out the retrievalAdd* functions for the systems we 
have in Gottingen.  One of the most difficult bits is to figure out 
how to complete the retrieval setups without running loops around the 
errors being reported.  I might be a complete idiot about this, but the 
documentation and error reporting by ARTS seem far from good here.


I have identified two problems:

1)  The covmat-block size requested by the add-functions is not 
reported in the documentation of said functions.


2)  The error when either retrievalDefClose or the individual 
retrievalAdd* functions fail is not detailed enough to even hint at the 
problem; it simply states that the covmat has the wrong size.


I have suggestions below for how I would fix it if I knew the functions 
well enough.  Ignore these if you want to, but please try to address the 
poor documentation and error reporting somehow.


For the first, each individual retrievalAdd* function would have to be 
addressed.  Some examples of problematic functions: jacobianAddFreqShift 
reports it may be "constant in time", which means covmat_block is 
1-by-1, and jacobianAddSinefit reports "one sine and one cosine term" 
per period-length, or a 2-by-2 uncorrelated covmat_block for every 
period length.  These also sound like reasonable sizes, given that they 
both are just used as baseline fits for sensor phenomena (so there is no 
p_grid dependency).  However, of course they fail when you use these 
covmat-block sizes.  This means there is an error in the method 
documentation.  To fix this, I suggest the increase in size of the 
Jacobian matrix is written clearly in each of the jacobianAdd* 
descriptions.  The same applies for their retrievalAdd* cousins, where the 
size of the covmat-block should be spelled out.


The second point seems even easier to address.  If the internal check 
fails, please report how.  If I see: "I was expecting the Jacobian 
matrix to be 4001 x 510 and the covariance matrix to be 510 X 510.  
Instead, the covariance matrix is 498 X 498", this means that I can 
begin to guess at the error.  Presently, the somewhat nonsensical 
"Matrix in covmat_block is inconsistent with the retrieval grids" is 
used instead, which does not help identify the cause of the problem at all.
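
Concretely, the check could build its message along the lines of this sketch 
(hypothetical code; nx, ny and the matrix accessors stand in for the actual 
internals):

  if (covmat.nrows() != nx || covmat.ncols() != nx) {
    std::ostringstream os;
    os << "I was expecting the Jacobian matrix to be " << ny << " x " << nx
       << " and the covariance matrix to be " << nx << " x " << nx
       << ". Instead, the covariance matrix is " << covmat.nrows()
       << " x " << covmat.ncols() << ".";
    throw std::runtime_error(os.str());
  }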


With hope,
//Richard

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi




Re: [arts-dev] Change to temperature Jacobians

2018-11-19 Thread Patrick Eriksson

Hi Stuart,

Richard and I have now worked on iyEmissionStandard, and the temperature 
Jacobian produced by this method now considers whether effects due to 
hydrostatic equilibrium shall be added or not. We have compared to 
perturbation calculations (though so far only for stokes_dim=1) and all 
looks OK. But we have not compared to pre-983 versions.


Your control file should now hopefully work. If not, tell us. If you 
compare to pre-983, please inform us about the outcome.


Sorry for this mishap in the development. I have now created a test that 
will catch this in the future. And I will try to add even more tests.


Bye,

Patrick



On 2018-11-15 17:08, Fox, Stuart wrote:

Hi all,

I have just noticed that there have been relatively large changes to some 
calculated temperature Jacobians with “recent” versions of ARTS. The 
differences appeared with arts-2-3-983. Attached is an example set of 
controlfiles/data that will show the differences when run with versions 
pre- and post-983. Is this something specific that I’ve done wrong in 
the set-up of the Jacobians, or is it something that was meant to 
happen, and if so then why?


Cheers,

Stuart


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi




Re: [arts-dev] freqShift in xaStandard and x2artsStandard

2018-09-29 Thread Patrick Eriksson

Hi Jonas, Hi Simon,

@Jonas: This is just a test case, but I want to mention that you should 
not expect accurate results using it. The f_grid has a general spacing 
of 400 kHz around the line center and that's too coarse for good results. 
Maybe this is clear to you, but a comment just in case ...



1) If the frequency grid f_grid is very narrow at certain points (~< 100 
kHz, intentionally or by mistake with VectorInsertGridPoints) the 
sensor.cc throws an error:


Indeed, this made the test fail. Going to 102 points, the f_grid has a 
spacing of just 40 Hz between some points and I then assumed that there 
was some numerical issue. This is probably a setup that should be 
avoided, but it seems that ARTS handles it. In the end the fix turned 
out to be this (in inversion_iterate_agenda):


  atmfields_checkedCalc( negative_vmr_ok = 1 )

That is, the change in f_grid happens to generate some negative VMRs in 
the first iteration. In inversions you may need to accept this.


By the way, the test case works with LM iteration even without allowing 
negative VMR. That should prove that ARTS handles the case in an acceptable 
manner. That said, I have not checked the final result. Feel free to do 
that.



@Simon: It took me some time to track this down as the error happened 
inside inversion_iterate_agenda. OEM does not display that error and it 
is then very difficult to understand what's going on. This must be fixed 
in some way. Can't OEM just simply display the error message?


Or rather just throw the error. It is not useful to continue if you have 
an error of this type. In this case, you instead get an error from 
another method that is just distracting.


(We can discuss when we meet, if needed)

Bye,

Patrick


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] freqShift in xaStandard and x2artsStandard

2018-09-24 Thread Patrick Eriksson

Hi all, Hi Jonas, Hi Richard,

All: If you do retrievals and use the most recent version, you need to make 
some small changes. The benefit is that you now also can retrieve 
frequency corrections. See the ChangeLog message for a summary.



Jonas: With the commit I just made it should be possible to retrieve a 
frequency shift using ARTS's OEM. The demo cfile should be pretty close 
to what you want to do. That is, for details see:


controlfiles/artscomponents/oem/TestOEM.arts

I have done some testing, and all looked OK. Tell me if it works for you 
or not.



Richard: I also made a small preparation in case we ever test 
retrieving spectroscopic variables. The mapping from the x-vector to 
spectroscopic data is planned to be handled by x2artsSpectroscopy (so 
far a dummy WSM).


Bye,

Patrick


On 2018-09-18 15:18, Jonas Hagen wrote:

Dear Patrick

Thanks, of course this would be the proper approach and I hope that you 
decide to implement instrument variables soon! This is the last missing piece 
on the way to operational WIRA-C wind retrievals with ARTS (and thus 
without Matlab). Until then I will tinker with my patch.


Best regards,
Jonas


On 18.09.2018 10:27, Patrick Eriksson wrote:

Dear Jonas,

Some quick feedback, I am on a conference.

The variables that you can retrieve using ARTS-OEM so far are mainly 
atmospheric quantities. Handling instrument variables is mainly left 
for the future.


A constant frequency switch could be handled by shifting the 
transitions as you tried to do. But frequency shifts are in fact an 
instrument parameter, and moving the transitions will not work for 
frequency stretch. So it seems to be time to implement a general way 
to handle instrument variables. I will discuss with Simon, who is 
also here at the conference.


For the moment I suggest that you do repeated linear inversions, 
adjusting your instrument frequencies after each linear inversion. 
This should work with your extension of xaStandard. After some 
iterations turn off the frequency switch and make a final inversion.


Kind regards,

Patrick



On 2018-09-17 17:22, Jonas Hagen wrote:

Hello ARTS Developers,

I'm trying to retrieve the Frequency Shift along with Wind with the 
ARTS internal retrieval. To my understanding, this should work, but 
support in xaStandard and x2artsStandard WSMs is missing and results 
in an error: "Found a retrieval quantity that is not yet handled by 
internal retrievals: Frequency"


For xaStandard(), the a priori Frequency Shift could easily be set to 
zero after line 793 of m_oem.cc along with baseline and pointing.
For x2artsStandard(), maybe a new WSV would make sense (f_shift) and 
the inversion_iterate_agenda would then call 
abs_linesShiftFrequency(f_shift), similar to the baseline stuff?
I tried to implement it myself but got stuck with jacobian_quantities 
and indices in x2artsStandard().


Best regards,
Jonas Hagen
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi





Re: [arts-dev] freqShift in xaStandard and x2artsStandard

2018-09-18 Thread Patrick Eriksson

Dear Jonas,

Some quick feedback, I am on a conference.

The variables that you can retrieve using ARTS-OEM so far are mainly 
atmospheric quantities. Handling instrument variables is mainly left for 
the future.


A constant frequency switch could be handled by shifting the transitions 
as you tried to do. But frequency shifts are in fact an instrument 
parameter, and moving the transitions will not work for frequency 
stretch. So it seems to be time to implement a general way to handle 
instrument variables. I will discuss with Simon, who is also here at 
the conference.


For the moment I suggest that you do repeated linear inversions, 
adjusting your instrument frequencies after each linear inversion. This 
should work with your extension of xaStandard. After some iterations 
turn off the frequency switch and make a final inversion.


Kind regards,

Patrick



On 2018-09-17 17:22, Jonas Hagen wrote:

Hello ARTS Developers,

I'm trying to retrieve the Frequency Shift along with Wind with the ARTS 
internal retrieval. To my understanding, this should work, but support 
in xaStandard and x2artsStandard WSMs is missing and results in an 
error: "Found a retrieval quantity that is not yet handled by internal 
retrievals: Frequency"


For xaStandard(), the a priori Frequency Shift could easily be set to 
zero after line 793 of m_oem.cc along with baseline and pointing.
For x2artsStandard(), maybe a new WSV would make sense (f_shift) and the 
inversion_iterate_agenda would then call 
abs_linesShiftFrequency(f_shift), similar to the baseline stuff?
I tried to implement it myself but got stuck with jacobian_quantities 
and indices in x2artsStandard().


Best regards,
Jonas Hagen
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi



Re: [arts-dev] [arts-users] Question about using single scattering database in ARTS

2018-09-06 Thread Patrick Eriksson

Hi,

The reason for the error is that the method ScatSpeciesScatAndMetaRead 
expects the files to hold data of type SingleScatteringData, while the 
standard habits are stored as ArrayOfSingleScatteringData (to get all 
data into a single file).


I am not using ScatSpeciesScatAndMetaRead myself, and I did not think 
about this. I think the solution is to make a new method reading a full 
scattering species from a single file. Or rather maybe the existing 
method shall be renamed to


ScatSpeciesScatAndMetaReadFiles

and ScatSpeciesScatAndMetaRead shall be the one reading a full 
scattering species (and thus take a String as input, not an ArrayOfString).


The problem is that I won't have time for this for a while. Any volunteers?

Bye,

Patrick




On 2018-09-06 10:08, wwy wrote:

Dear All,

An error occurs when I want to use the xml-files from StandardHabits in 
ARTS (version 2.3.897).

I use the method as:
ScatSpeciesScatAndMetaRead(scat_data_files=["LiquidSphere.xml"])

XML parse error: Tag <SingleScatteringData> expected but <ArrayOfSingleScatteringData> found.
Check syntax of XML file

When I use the xml-file in testdata, it can be done. So how can I use 
the files from the single scattering database?

Thanks
Regards



___
arts_users.mi mailing list
arts_users...@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_users.mi


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] LIDORT and VLIDORT

2018-07-04 Thread Patrick Eriksson

Hi Stefan, Hi all,

Thanks for the hint. Yes, we have not discussed these scattering 
solvers, but I did some googling about them just some weeks ago. I heard 
about these solvers at the ECMWF workshop, and got curios.


Yes, the Jacobian feature sounds interesting. But some possible 
limitations to consider:


* This is Fortran 90 code. Is that OK for us? Anyhow, Fortran works 
poorly with threading inside ARTS.


* In the paper you sent it says: we restrict ourselves to scattering for 
a medium that is ‘‘macroscopically isotropic and symmetric’’. Has this 
restriction been removed? I never managed to sort this out.


* Would we be free to redistribute the code?

Can your contact answer the last two questions? Or does someone else out 
there know?


Bye,

Patrick




On 2018-07-03 10:10, Stefan Buehler wrote:

Dear Patrick, dear ARTS developers,

I recently got an enthusiastic recommendation for a solver that we have so far 
not talked about: LIDORT and its polarised version VLIDORT by Robert Spurr.

https://link.springer.com/chapter/10.1007/978-3-540-48546-9_7

If I understood correctly, it is based on DISORT, but does provide Jacobians. 
Attached is a paper I found.

All the best,

Stefan


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Re: Fwd: ARTS user

2018-07-02 Thread Patrick Eriksson

Dear Alfred,

I'm glad to learn that the development version can handle the radar 
observation. But I want to know what this MC module can simulate: the 
reflection coefficient of cloud or the thermodynamic radiation?


The MC module is called MCRadar.

I am not sure what you mean by "thermodynamic radiation", but 
MCRadar is intended to mimic radar measurements, including multiple 
scattering, attenuation and antenna pattern. So I assume it returns 
what you are looking for.


For more details contact: Adams, Ian 

The module restricted to single scattering is called iyActiveSingleScat.


As I mentioned before, I made some changes to the stable version 2.2.64, 
added a radar transmitter and got the simulated reflection coefficient 
data. However, there is no benchmark result for me to verify the 
validity. If the development version can simulate reflection then I can 
compare those two results.


This was in fact my thinking, but not clearly expressed in my email. Why 
not start with light rain and compare to iyActiveSingleScat (as this 
method is very fast), and if all OK continue with MCRadar and cases with 
stronger scattering.


We have not made any extensive comparisons, but in the tests I have done 
iyActiveSingleScat and MCRadar agree as long as multiple scattering can be 
ignored. It seems though that MCRadar has a bias just above the surface, 
but not critical as this is inside the clutter zone.


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Fwd: ARTS user

2018-06-30 Thread Patrick Eriksson
Dear Alfred,

A short answer as I am on vacation. It seems that you should have contacted us 
earlier. If I understand what you wrote correctly, the features you discuss are 
at hand in the 2.3 development version. We now have both a single-scattering 
and an MC module for simulating radar observations.

So why not take a look at the latest version and see if it meets your demands. 
The main persons to contact are Ian Adams and me.

Bye,

Patrick

Stefan Buehler wrote: (30 June 2018, 11:55:41 
CEST)
>Dear all,
>
>is anyone familiar with this?
>
>Best wishes,
>
>Stefan
>
>> Begin forwarded message:
>> 
>> From: "良亮" 
>> Subject: Re: ARTS user
>> Date: 29. June 2018 at 13:15:48 CEST
>> To: "stefan.buehler" 
>> 
>> Dear Mr. Buehler
>> Thank you for your reply.
>> 
>> Yes, I want a benchmark result which simulates the reflection of rain
>cloud or whatever ARTS can handle. For passive radiation, the 
>widely used example is the raining cloud box at 19.4 GHz (see: Microwave
>radiative transfer intercomparison study for 3-D dichroic media, 2006),
>for which I get a similar result using my modified program. My
>application is to simulate active radiation, but it is hard to find an
>example like the above.
>> 
>> I use the latest version 2.2.64. I changed the coordinate system to
>rectangular coordinates and use the Monte Carlo function directly. Porting
>ARTS from Linux to Windows is not so hard. Only some C++ header files and
>file addressing are different.
>> 
>> Yours sincerely
>> Alfred xu
>> 
>> 
>> -- Original message --
>> From: "stefan.buehler";
>> Date: Friday, 29 June 2018, 5:47 PM
>> To: "良亮";
>> Subject: Re: ARTS user
>> 
>> Dear Alfred,
>> 
>> nice to hear that you are using ARTS.
>> 
>> Is it specific data from a specific paper that you want?
>> 
>> Which version of ARTS have you used? Do you think your modifications
>may be useful for other (Windows) users?
>> 
>> Best wishes,
>> 
>> Stefan
>> 
>> > On 29. Jun 2018, at 09:44, 良亮  wrote:
>> > 
>> > Dear Mr. Buehler
>> >
>> >I'm a student from Southeast University in China. I have
>studied ARTS for several months and made some changes to this software
>package. I managed to run ARTS under Windows and added a transmitter as a
>microwave source to simulate the reflection. However, I found it is
>difficult to prove the validity of the reflective simulation because of the
>lack of examples and data from papers. Could you tell me where I can get
>the data?
>> > 
>> > Yours sincerely
>> > Alfred xu
>> 
>> 

--
Sent from my mobile by K-9 Mail.

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Changes around iy-methods and aux variables

2018-03-13 Thread Patrick Eriksson

Hi all,

I am about to make a commit with some major changes. Richard, Jana and I 
have since the ARTS workshop worked on streamlining the radiative 
transfer methods, with the aim of having a coherent internal system 
covering both non-LTE and scattering, as well as simplifying and 
extending the calculation of analytical Jacobians. At the same time, 
some changes around ppath and auxiliary data have been introduced.


The methods iyEmissionStandard, iyTransmissionStandard and 
iyActiveSingleScat are largely rewritten. Most of the changes are just 
seen on the inside. There could be some small impact on the main 
outputs, iy and the resulting Jacobian. This due to slightly changed 
approach on how variables are averaged over a ppath step. But the 
differences we have seen so far all have been negligible.


On the other hand, a main change is that ppath is now calculated outside 
of the iy-methods. This is for increased flexibility, and makes it possible 
to avoid recalculating the ppath for e.g. iterative OEM inversions. This 
means that you need to call ppathCalc inside or outside of 
iy_main_agenda (if outside, you need Touch(ppath) inside the agenda). 
Note that the definitions in agendas.arts are updated and likely you 
don't need to care about this.


The treatment of "aux-variables" has also changed. Various atmospheric 
and radiative properties along the ppath are now exported as 
pre-defined WSVs. These variables are denoted as ppvar_something (ppvar 
= propagation path variable). One example is ppvar_t, which gives you the 
temperature at each point of the ppath.


iy_aux_vars is now reduced to only treat variables that can be expressed 
as a matrix, as iy itself, such as (total) optical depth. As a 
consequence, iy_aux is now an ArrayOfMatrix. And all iy_aux_vars can now 
be passed on to y_aux. However, I have not had time to resurrect all 
iy_aux_vars (such as Faraday rotation), and little testing has been done. So 
don't expect too much of this part yet.


Some documentation has been updated, but a lot remains to do on that side ...


We think these changes will make it easier to use and develop ARTS in 
the long run, but could cause some problems now. Tell us if you obtain 
any suspicious results, or if any feature you use has disappeared.


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Transit radius

2018-03-09 Thread Patrick Eriksson

Stefan,

A quick answer.


Is there a smarter way?


Not inside ARTS itself.



And, for the way I outlined, what is the currently recommended way to get out 
tangent altitude and opacity along the los?


Note that you don't need to make a Tb calculation; you can calculate the 
transmission directly by iyTransmissionStandard. A bit quicker and you 
can include particles.


The method TangentPointExtract extracts the tangent point data. The first 
element of the vector returned is the tangent altitude (or is it still 
radius in v2.2?).


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] iyFOS and iyRadioLink

2018-01-29 Thread Patrick Eriksson

Hi all,

For some time we have been working on unification inside the radiative 
transfer methods and handling of scattering data. This has resulted in 
double versions of many internal methods and functions.


In order to make it a bit easier to reach a point where we can start 
removing old things, I decided to deactivate two methods:


iyFOS
iyRadioLink

I did that on the assumption that these methods are not used actively by 
anyone. Tell me if I am wrong.


The plan is to resurrect these methods later, when we are ready with 
the revision of the core parts.


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] RT4 and auto-increasing number of streams - a cautionary note

2018-01-25 Thread Patrick Eriksson

Hi Victoria,

Some comments, just in case ...

My question just referred to whether the radiation field gets flatter or 
more varying (as a function of zenith angle) with increased scattering, 
and thus whether the issue of angular interpolation gets more serious or not. 
The overall calculation accuracy should just increase with the number of 
streams.


At this moment I leave it to Jana to comment on checks and warnings 
associated with the scattering data.


Bye,

Patrick



On 01/24/18 20:00, Victoria Sol Galligani wrote:

Hello everyone! Hi Jana!

I was aware of what you are saying about the number of streams. However 
 in the context of the ARTS scattering methods + RTTOV-scatt 
inter-comparison I am working on, I have been testing the sensitivity of 
the output TBs to the number of streams chosen. In this regard, I think 
it would be interesting to run RT4 even if the interpolation induces 
large errors and the scattering matrix is not resolved well. If I run 
RT4 without this warning (actually error because arts doesn't run with 
its current checks) I could easily answer your answer Patrick (if the 
angle interpolation error increases with strong scattering), because I'm 
running with different profiles, some with much more scattering than 
others. At the moment its clear to me that more streams are needed for 
more scattering cases, so I guess that already answers your question 
Patrick ...


Do you have any advice regarding turning this warning/error off in my 
ARTS distribution?


Looking forward to sharing my preliminary results on this subject,
Hugs,

Victoria


On Mon, Jan 22, 2018 at 4:56 PM, Patrick Eriksson 
wrote:


Hi Jana,

I was aware that the auto-increase does not change the number of
output angles, but still thanks for the warning.

That is, for me it was understood that you should check that the
start number of streams is high enough, to make sure that angle
interpolation does not induce too large errors. But not clear to me
is if the angle interpolation error increases with strong
scattering. This is something that I have never tested.

Your comments seem to indicate that it is the case (i.e. increasing
errors), but have you tested it? Without that it is a bit hard to
decide what to do.

Anyhow, my recommendation is that you focus on general cleaning and
documentation. Leave any possible extension of RT4 on this point to
us others, if we find it necessary.

Bye,

Patrick





On 2018-01-22 17:46, Jana Mendrok wrote:

Hi,

With some of you we have discussed (even suggested) the use of
the auto-increase number of streams feature of the RT4 interface.
The background to this feature is that RT4 needs the scattering
matrix to be properly resolved in order to conserve the
scattered energy satisfactorily. Using the feature, the number
of streams is internally(!) increased until the scattering
matrix resolution is deemed sufficient.

The crucial issue, which I just got reminded of when I went through
the code, is that this increase is only done internally. The
output will remain on the original number of streams set!

That means, *one should not start with a very low number of
streams* and should not let the system completely self-adapt
(as, strictly speaking, that's not what it is doing - the output
dimensions won't adapt and will always remain the
original/starting number of streams) (I'm going to add that info
to the online doc).

It should be kept in mind, that - unlike Disort - neither RT4
nor ARTS itself have good interpolation options for "off-stream"
angles, i.e. the number of streams RT4 is set up with does not
only determine the RT4 solution accuracy (this is
improved/adapted with the auto-increase feature), but also the
number of output directions (not affected by auto-increase),
hence the accuracy with which the field is known to and further
applied within other WSM of ARTS.

Best wishes,
Jana


Ps.  Something more for developers...

Thinking about that, this seems quite inconvenient. So, the
question is what to do about it, how to change that. Two possible
solutions pop into my head:

(1) instead of interpolating the high-stream-solution to the low
number of streams, we could interpolate everything to the
highest number of streams and output the "high-resolution" field.

(2) re-define (doit_)i_field from Tensor7 into an ArrayOfTensor6
with one Tensor6 entry per freq. this would allow to have
different angular dimensions per frequency (we'd need to also
store the angles per each frequency). However, that would affect
the output of other solvers, too, and the way (doit_)i_field is applied
in (i)yCalc.

Re: [arts-dev] RT4 and auto-increasing number of streams - a cautionary note

2018-01-22 Thread Patrick Eriksson

Hi Jana,

I was aware that the auto-increase does not change the number of output
angles, but thanks anyway for the warning.


That is, I understood that one should check that the starting number of
streams is high enough, to make sure that angle interpolation does not
induce too large errors. But it is not clear to me whether the angle
interpolation error increases with strong scattering. This is something
I have never tested.


Your comments seem to indicate that it is the case (i.e. increasing
errors), but have you tested it? Without that, it is a bit hard to decide
what to do.


Anyhow, my recommendation is that you focus on general cleaning and
documentation. Leave any possible extension of RT4 on this point to the
rest of us, if we find it necessary.


Bye,

Patrick





On 2018-01-22 17:46, Jana Mendrok wrote:

Hi,

With some of you we have discussed (and even suggested) using the
auto-increase number-of-streams feature of the RT4 interface.
The background to this feature is that RT4 needs the scattering matrix
to be properly resolved in order to conserve the scattered energy
satisfactorily. Using the feature, the number of streams is
internally(!) increased until the scattering matrix resolution is deemed
sufficient.


The crucial issue, which I was just reminded of when I went through the
code, is that this increase is only done internally. The output will
remain on the originally set number of streams!


That means *one should not start with a very low number of streams* and
should not let the system completely self-adapt (as, strictly speaking,
that is not what it is doing: the output dimensions won't adapt and will
always remain at the original/starting number of streams). I'm going to
add that info to the online doc.


It should be kept in mind that, unlike Disort, neither RT4 nor ARTS
itself has good interpolation options for "off-stream" angles. That is,
the number of streams RT4 is set up with does not only determine the
accuracy of the RT4 solution (which is improved/adapted by the
auto-increase feature), but also the number of output directions (not
affected by auto-increase), and hence the accuracy with which the field
is known to, and further applied within, other WSMs of ARTS.
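To make this concrete, a minimal numpy sketch (made-up smooth field and
equidistant output angles; this is not the actual RT4 quadrature or any
ARTS code): the error of interpolating back from the output grid depends
only on how many output angles there are, no matter how accurately the
values on that grid were computed.

import numpy as np

theta_fine = np.linspace(0.0, 180.0, 181)               # dense test angles [deg]
true_field = 1.0 + np.cos(np.radians(theta_fine))**2    # hypothetical smooth field

for n_out in (4, 8, 16):                                # number of output angles
    theta_out = np.linspace(0.0, 180.0, n_out)
    field_out = 1.0 + np.cos(np.radians(theta_out))**2  # "exact" values on output grid
    interp = np.interp(theta_fine, theta_out, field_out)
    print(n_out, np.max(np.abs(interp - true_field)))   # error shrinks only with n_out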


Best wishes,
Jana


Ps.  Something more for developers...

Thinking about it, this seems quite inconvenient. So, the question is
what to do about it, and how to change it. Two possible solutions pop
into my head:


(1) instead of interpolating the high-stream solution to the low number
of streams, we could interpolate everything to the highest number of
streams and output the "high-resolution" field.


(2) re-define (doit_)i_field from a Tensor7 into an ArrayOfTensor6 with
one Tensor6 entry per frequency. This would allow different angular
dimensions per frequency (we would need to also store the angles for
each frequency). However, that would affect the output of other solvers,
too, and the way (doit_)i_field is applied in (i)yCalc.


So, option (1) seems less of a hassle.

Neither option will solve all issues (for example, even if the radiation
field is quite smooth, linear interpolation from low-resolution fields
won't be very good, and higher-order interpolation intrinsically
requires, well, higher numbers of streams), but at least some of them
(like better conserving the shape of the radiation field where it is
derived from a higher number of streams).


Any opinions? Do you consider this an issue at all, or is a cautionary
note in the documentation enough? If it is an issue, any better ideas on
solutions, or opinions on "my" two options?


And, is anyone other than me willing to implement possible changes?


--
=
Jana Mendrok, Ph.D.

Email: jana.mend...@gmail.com 
=



Re: [arts-dev] Retrieval units and transformations

2017-12-04 Thread Patrick Eriksson

Hi Richard,

About the current setup, it is indeed a heritage from the past. I just
copied your setup. You had all unit conversions local.


Yes, "mea culpa".


I like the idea to allow more units but I think this should be handled 
at a later stage than the iy-functions.


I have argued before that it would be a very good idea to ensure that
all output from iy-functions is purely dependent on the physics of the
underlying calls --- that is, on the inputs to the iy-functions. All
additional conversions should happen in y-functions that have been
designed to fit with Rodgers-like computations.


In principle I agree that that would be the cleanest solution. I did
consider this but ruled it out due to practical issues. Maybe I should
have mentioned that in the first email, but I was afraid it would
distract from the main topics.


I think the unit conversions are best done inside the iy-methods for
several reasons:


* The conversion requires access to a number of variables, and that is
most easily guaranteed inside iy-methods. Even if we made all those
quantities ppvar ones, we would still have the next issue.


* Outside of iy-methods the Jacobian values are on individual grids, and
we then need to interpolate other quantities to make the conversion,
which is negative for numerical accuracy. In addition, the resolution
along the path should in general be much better than the one implied by
the retrieval grids. For this reason, I think it is best to do the
conversion for Jacobian values along the ppath (i.e. convert diy_dpath,
and not diy_dx).


* The last point is especially important for quantities that create
dependencies between Jacobians. For example, if temperature and water
are retrieved, the temperature Jacobian depends on whether VMR or RH is
used for humidity. (I think temperature + species ND retrievals cause an
error now, as this type of dependence is not yet handled.)
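As a hedged illustration of what such a conversion along the ppath
amounts to (variable names are made up, not the actual ARTS ones): with
number density n = x p / (kB T), x being the VMR, the chain rule gives
dy/dn = dy/dx * kB T / p, evaluated point by point along the path.

import numpy as np

KB = 1.380649e-23                            # Boltzmann constant [J/K]

p_path = np.array([1000e2, 500e2, 100e2])    # pressure at the ppath points [Pa]
t_path = np.array([290.0, 250.0, 220.0])     # temperature at the ppath points [K]
diy_dvmr = np.array([0.8, 1.1, 0.4])         # Jacobian w.r.t. VMR along the path

# n = vmr * p / (kB * T)  =>  dvmr/dn = kB * T / p
diy_dnd = diy_dvmr * KB * t_path / p_path    # Jacobian w.r.t. number density
print(diy_dnd)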



This would still force the conversion of the wind-field to m/s instead
of Hz after propmat^, but it would remove VMR, HSE, iy-conversion and
jacobian-transformation from this code, placing it at a later stage. (If
you want HSE, then this and all that it depends on should be input as a
variable so that the derivatives can be computed based on 'actual'
theory instead of understood theory. You know the problem the latter has
caused because of the switch in how layers work in iy***2 functions.)


Converting wind Jacobians to m/s outside of iy, would that not
contradict your own overall preference? Anyhow, if it would make the
coding easier, I would not mind putting it among the other unit
conversions.


I cannot judge right now whether HSE can be incorporated as
post-processing. It seems you have started to work on HSE, so you can
judge this better at this point.


I am at a workshop this week, so no time for digging into details or
starting any ARTS coding right now. The first step is to agree on the
overall plan. My suggestion is to make unit conversions towards the end
of iy-methods (not outside, for the reasons discussed above). OK?


Bye,

Patrick





[arts-dev] Retrieval units and transformations

2017-12-03 Thread Patrick Eriksson

Hi Simon, Richard and all,

I started to think about how to allow a log10 retrieval "unit" for
scattering quantities. As often happens, this ended with me wanting to
make a general cleanup and reorganisation.


My idea is to move towards a clearer distinction between units and
transformations. In addition, we have to deal with two types of
transformations, linear and non-linear. I think these three shall be
applied in the order: unit, non-linear and linear.


Comments:

unit: This would now just be pure unit changes, such as "nd" (number 
density). Later I would also like to allow relative and specific 
humidity for H2O. We could also allow "C" for temperature ...


(Unit changes will be specific for each retrieval quantity, while
transformations should be implemented in a general manner.)


Non-linear transformations: I would like to remove the "logrel" option
(now a unit option), and instead generally introduce "log" and "log10"
(without ruling out adding more transformations, such as tanh+1?).


Linear transformations: As already introduced by Simon.

The unit part will be handled by the iy-methods. For the transformations
I suggest extending the scope of the present jacobianTransform (as well
as merging it with jacobianAdjustAfterIteration, which handles a
rescaling for "rel" necessary for iterative OEM).




All: Comments? Something that I have missed?


Richard: The handling of units seems a bit messy to me. The function
dxdvmrscf is applied in get_ppath_pmat_and_tmat, but only if
from_propmat. dxdvmrscf is also applied in
adapt_stepwise_partial_derivatives. This confuses me, but could be a
heritage of my old code.
Would it not be simpler if the core code just operated with the ARTS
default unit, i.e. VMR for abs species? And then the conversion is done
only on the final Jacobian values (along the ppath). This should be a
general function, called towards the end of all iy-methods providing
Jacobians.
As far as I can see that should work, and should give cleaner code.
Agree? Or have I missed something?


Bye,

Patrick


Re: [arts-dev] Concept for Radiative Fluxes and Heating Rates in ARTS

2017-11-30 Thread Patrick Eriksson

Hi Freddy, Hi all,

A great idea to break down the calculations into more steps than we
discussed in Hamburg. A general OK to your plans, but I think we need to
establish what terminology to use in ARTS before discussing the details.
I got dizzy when reading your plan ...


In ARTS we call spectral radiance "intensity", i for short. "Intensity"
is a vague name, but it would be a big thing to change this nomenclature
now. But I think we should use clearer names when adding new WSVs. At
the same time, we have discussed renaming doit_i_field, as well as
having i_field for both the total atmosphere and the cloudbox.


I have quickly tried to make a naming suggestion for the main WSVs,
found below. (It seems we both have looked at Wikipedia.) I picked flux
density (fluxd) in favor of irradiance to have a more distinct
difference to radiance. Can be discussed.


I am not clear about where we need to keep upward and downward streams
separated in these new WSVs. So I am not sure about the exact tensor
dimensions needed yet. For example, it seems that you have a directional
dimension for heating rates. That I don't get? By the way, what unit
shall we use for heating rates? SI should be K/s!?
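For reference, the usual definition behind the K/s remark, as a hedged
numpy sketch (made-up profile values, not ARTS code):
H = -(1/(rho c_p)) dF_net/dz, which indeed has SI unit K/s.

import numpy as np

z = np.array([0.0, 1000.0, 2000.0])        # altitude grid [m]
f_net = np.array([240.0, 238.5, 237.5])    # net flux density [W/m2]
rho = np.array([1.20, 1.09, 0.99])         # air density [kg/m3]
cp = 1004.0                                # specific heat capacity [J/(kg K)]

heating_rate = -np.gradient(f_net, z) / (rho * cp)   # [K/s]
print(heating_rate * 86400.0)                        # often reported as K/day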


Regarding "IfieldFromIycalc1DPP". I have long planned to make a 
ppath1DPP. Would fit well here. The new iyEmissionStandard will make 
this very easy. You will just need to do calculations at TOA and 
surface. (In principle only TOA could suffice with a limitation to 
specular surfaces, but I think it is better to be general despite a bit 
slower)


All written very quickly. Let's discuss during the video con later today.

Bye,

Patrick


Name: Spectral radiance
Unit: W/(m2 Hz sr)
ARTS: i_field [Tensor 7]
ARTS: cloudbox_i_field [Tensor 7]

Name: Radiance
Unit: W/(m2 sr)
ARTS: radiance_field [Tensor 6]

Name: Spectral irradiance
Name: Spectral flux density
Unit: W/(m2 Hz)
ARTS: spectral_fluxd_field [Tensor 5]

Name: Irradiance
Name: Flux density
Unit: W/m2
ARTS: fluxd_field [Tensor 4]

Name: Heating rate
Unit: ?
ARTS: heating rate [Tensor 3]

Angular grids:
field_za_grid
field_aa_grid
cloudbox_field_za_grid
cloudbox_field_aa_grid





On 2017-11-30 14:35, Manfred Brath wrote:

Hello all,

I plan to implement functions in ARTS to calculate monochromatic
(spectral) radiative fluxes, also called monochromatic (spectral)
irradiance; radiative fluxes, also called irradiance; radiance; and
heating rates from the radiation field. Any comments or suggestions are
welcome.
For that purpose I would like to implement five new workspace methods,
which will be explained below.



  RFAngularGridsSet

This method will be similar to DOAngularGridsSet and set up the angular
grids for the flux calculation, but it also calculates the integration
weights for the zenith angle integration. (Maybe this function can be
included in a revised version of DOAngularGridsSet.)


Input:

n_za
Number of grid points in zenith direction per hemisphere (Index)
n_aa
Number of grid points in azimuth direction per hemisphere (Index,
default=1)
gridtype_az
Defines the type of azimuth grid (string):

  * double_gauss, double Gauss in μ = cos θ
  * linear_mu, linear in μ = cos θ
  * linear, linear in θ

Output:

doit_za_grid_size
Number of equidistant grid points of the zenith angle grid, defined
from 0° to 180°, for the scattering integral calculation.
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
za_grid_weights
integration weights for zenith angle integration (Vector, new
workspace variable)


  IrradianceFromIfield

This method will calculate the monochromatic (spectral) irradiance and
the irradiance (radiative fluxes). Important: this function will only
use the first Stokes component of doit_i_field, and iy_unit must be “1”.


Input:

doit_i_field
Radiation field (Tensor7)
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
f_grid
Frequency grid (Vector)
Output:

sir_field
spectral irradiance (radiative flux) (Tensor5, new workspace
variable) [W/(m2 Hz)]. Size: [Nf, size(doit_i_field, dim=1),
size(doit_i_field, dim=2), size(doit_i_field, dim=3), 2], last
dimension is upward and downward direction.
ir_field
irradiance (radiative flux) (Tensor4, new workspace variable)
[W/m2]. Size: [size(doit_i_field, dim=1), size(doit_i_field, dim=2),
size(doit_i_field, dim=3), 2]
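As a hedged sketch of what the zenith integration with za_grid_weights
boils down to for an azimuthally symmetric field (names and the test
field are made up; a Gauss-Legendre rule per hemisphere stands in for
the "double gauss" option): F = 2 pi times the integral of I(mu) mu dmu
over each hemisphere.

import numpy as np

n_za = 8                                          # quadrature points per hemisphere
nodes, weights = np.polynomial.legendre.leggauss(n_za)
mu = 0.5 * (nodes + 1.0)                          # map [-1,1] to (0,1]
w = 0.5 * weights

def radiance(mu_signed):                          # hypothetical field [W/(m2 Hz sr)]
    return 1.0e-13 * (1.0 + 0.3 * mu_signed)

f_up = 2.0 * np.pi * np.sum(w * mu * radiance(mu))     # upward spectral flux
f_down = 2.0 * np.pi * np.sum(w * mu * radiance(-mu))  # downward spectral flux
print(f_up, f_down)                                    # [W/(m2 Hz)]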


  RadianceFromIfield

This method will calculate the radiance and the irradiance (radiative
fluxes). Important: this function will only use the first Stokes
component of doit_i_field, and iy_unit must be “1”.


Input:

doit_i_field
Radiation field (Tensor7)
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
f_grid
Frequency grid (Vector)
Output:

r_field
radiance (Tensor

[arts-dev] Smaller changes around PSDs

2017-10-19 Thread Patrick Eriksson

Hi all,

This is only relevant if you are using particle_bulkprop_field and PSDs.

Some comments, as I have made a commit that will give a crash if you are
using any PSD method; a few features were also added:

Most importantly, pnd_size_gridFromScatMeta and 
MassSizeParamsFromScatMeta are removed.

Replaced with ScatSpeciesSizeMassInfo.

The new WSM sets three variables: scat_species_x, scat_species_a and
scat_species_b. x refers to the "size descriptor", which can be either
mass, any diameter, or area. The other two specify mass as


mass = a*x^b

This expression is applied even if x represents mass!
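As a hedged example of the convention (based on the standard definition
of the volume-equivalent diameter; numbers and code are illustrative,
not from ARTS): if x is dveq, a solid ice sphere gives
a = pi rho_ice / 6 and b = 3, while x being mass itself corresponds to
a = 1 and b = 1.

import numpy as np

RHO_ICE = 917.0                               # bulk ice density [kg/m3]
a, b = np.pi * RHO_ICE / 6.0, 3.0

dveq = np.array([50e-6, 100e-6, 500e-6])      # size grid in dveq [m]
mass = a * dveq**b                            # particle mass [kg]
print(mass)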

An example on a PSD definition using the new stuff:

ArrayOfAgendaAppend( pnd_agenda_array ){
  ScatSpeciesSizeMassInfo( species_index=agenda_array_index, 
x_unit="dveq", do_only_x=1 )

  Copy( psd_size_grid, scat_species_x )
  Copy( pnd_size_grid, scat_species_x )
  psdMH97( t_min = 10, t_max = phase_tlim, t_min_psd = 210 )
  pndFromPsdBasic
}

If the PSD is using a and b, the first WSM call in the agenda should 
look a bit different, e.g.:


ScatSpeciesSizeMassInfo( species_index=agenda_array_index, 
x_unit="dmax", x_fit_start=100e-6, x_fit_end=1 )


Bye,

Patrick


Re: [arts-dev] Propagation matrix representation

2017-06-23 Thread Patrick Eriksson

Hi Richard, hi all,

I notice a trend here!

Simon is looking into how we shall treat covariance matrices. And he is
also considering introducing a special class, to allow us to make use of
the block structure of such matrices, and also, to some extent, of their
symmetry.


Yesterday Robin made the remark that we can save space for scattering
matrices by instead storing the amplitude matrix values. The number of
free variables then decreases from 16 to 7. (Robin, correct me if
something is wrong.) Seems reasonable for the scattering database. To be
discussed whether this also should be used inside ARTS.



Richard, I see your point and I think this is basically a good idea.

As I wrote in some other email, I think A is valid also for scattering. 
Jana and/or Robin, can you comment/confirm?


Some comments:

Richard, for the discussion you should maybe clarify where this approach
should be applied. You just discuss the "propagation matrix". This
variable has unit 1/m, but your A must have unit [-]. That is, do you
want to apply the scheme both to propagation matrices and to extinction
over some path length? As I see it, if introduced, the approach should
be used as far as possible, and this has consequences around ARTS.


However, the idea of including variables to describe Faraday and Zeeman
seems overly complex. What do you gain by postponing the impact of
Faraday and Zeeman, instead of including the effect from the start and
letting the rest of the machinery be blind to whether b != 0 is caused
by Zeeman or scattering?


For me it seems simplest to just apply a vector here, where [a, b ... w]
have a specific order. In fact, this is exactly what we do in the format
for storing scattering data. There we just store the independent values,
and form the vectors and matrices when imported into ARTS. It would be
very good if the order we use in the scattering data format could also
be used in the propmat class.


Some comments before starting the midsummer celebrations!

Bye,

Patrick


On 2017-06-22 17:05, Richard Larsson wrote:

Hi all,

I would like to propose switching to an internal representation of the 
propagation matrix using a specially designed class, imaginatively named 
"PropagationMatrix".  Some background below.  Inputs wanted!


With the help of Philippe Baron, I added a new matrix-exponent function 
that is about 30X faster than the base implementation.  This 
implementation only works for the very specific case when


F = exp(A), and A =

  a  b  c  d
  b  a  u  v
  c -u  a  w
  d -v -w a

This seems to be the case for all matrices that we are concerned about 
in the propagation parts of ARTS, even in the scattering cases.  Is this 
true?


It is the case for clearsky anyways, so this is kinda nice to have at
hand. Thanks Philippe!
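For anyone who wants to play with the structure: a minimal Python sketch
building the 4x4 matrix from the 7 independents and feeding it to a
generic matrix exponential (made-up parameter values; the fast
special-case solver mentioned above is not reproduced here):

import numpy as np
from scipy.linalg import expm

def propmat(a, b, c, d, u, v, w):
    return np.array([[a,  b,  c,  d],
                     [b,  a,  u,  v],
                     [c, -u,  a,  w],
                     [d, -v, -w,  a]])

A = propmat(0.5, 0.1, 0.0, 0.02, 0.3, 0.0, 0.05)   # made-up values
F = expm(A)                                        # F = exp(A), as above
print(F)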


Got me thinking, though, that we are wasting a lot of memory and a lot
of computing time keeping the entire 4x4 propagation matrix around when
all we really need is just the 7 independents a, b, c, d, u, v, and w.
I was originally considering proposing an implementation change in
propmat_clearsky to rather use Vectors of these parameters to represent
the matrix. However, that would be overly simplistic and might not be
beneficial enough to justify the extra work.


Instead, I would like to propose a similar class, PropagationMatrix, 
that can store the entire propagation matrix in parameterized form that 
is reduced to the seven variables above by simple mechanisms.


For unpolarized absorption, the parameterization would just keep a 
Vector of the absorption, i.e. "a" above.


For Faraday, it would also keep a single Vector of the rotation 
strength, but also a Numeric for the angle between the magnetic field 
and the line-of-sight.  These two are enough to generate "u" above.


For Zeeman, it would keep three Vectors of the strength and two angles 
for the magnetic field relative to the line-of-sight.  These are enough 
to generate the 7 parameters.


Are there other cases of propagation requiring care that I am missing or 
do we only have these three?  Probably, and I will simply store these as 
the 7 parameters.


When all is added up to a final product in the end, only then would the 
full seven Vectors be used (if necessary) before the transmission matrix 
is computed.


This PropagationMatrix will of course implement a way to know how to 
multiply itself with ARTS-matrices and Vectors as the current 
Matrix-representation does.  Numerical multiplications are simpler still.


This would require adding the PropagationMatrix to the global variables
that can be read and viewed from the controlfile. Otherwise, we would
have to use the same memory layout as before in every instance of
interacting with other methods...


The major advantage is in the summation phase where each type of 
propagation matrix knows how it should behave.  Simple diagonal matrices 
don't have to sum to other things than the diagonal.  Faraday to "u".  

Re: [arts-dev] Scattering calculations when the cloud box is switched off

2017-06-07 Thread Patrick Eriksson

Hi Jana and Jakob , hi all,

Before commenting on this particular question, I see a more general
discussion here. There are many similar issues. Shall we focus on
catching potential user mistakes/misunderstandings, or be less
restrictive to simplify batch/operational processing? For example, I
discussed with Simon today how OEM shall behave when an error occurs
during the iterations. (And I liked Simon's solution: OEM does not throw
an error but flags the problem by an output argument. We must trust that
the user checks that variable.) Hence, it would be good to come up with
a general strategy. The question is when and how to discuss this.


There will be a bit of ARTS planning next week when Simon and I are in
Hamburg. I don't know if we will get time to discuss this too, but
maybe. So those of you who will not be in Hamburg: if you have any
opinion on this matter, send an email so we know about it, in case we
reach this issue.



Regarding the present issue, I think it should be possible to use the
same set-up even if the cloudbox happens to end up with zero size.
Whether there should be some kind of "robust flag" or not can be
discussed.


This is in line with my general view. We shall not be too restrictive in
our tests. Real errors SHALL be caught, but as long as things are
formally correct, I think it is best to let things pass. In the end
there could be a good reason for doing things that way.


Bye,

Patrick



On 2017-06-07 14:24, Jana Mendrok wrote:

Hi Jakob,

thanks for your feedback!
It was me who made that change, for the reason you also identified: that
otherwise it easily goes unnoticed that actually no scattering has been
done. This actually happened to me a few times. And consider that when
calling the scattering solver, the user intends to actually perform a
scattering calculation. I understand your issues, though.


Spontaneously, I don't see an option that satisfies both. Below are a
couple of options I can think of to deal with this issue (in the PS,
some options that you yourself could apply without changes to the
official code). I would appreciate feedback from other developers (and
users): what do you prefer, what is considered more important (my issues
of course seem more important - to me; very subjective). Or maybe you
have better ideas on how to solve that conflict.


So, code-wise we could (either):

- generally go back to the old behaviour.

- stay with the new behaviour.

- introduce a ("robust"?) option to allow the user to control the 
no-cloudbox behaviour.


- make cloudboxSetAutomatically behave differently for clearsky cases
(return a minimal cloudbox? and maybe let the user control which
behaviour - minimal or no cloudbox - is applied?).


wishes,
Jana


PS. Some options you yourself have, Jakob:

- you can of course locally remove the newly introduced error throwing 
and go back to the old behaviour in your own ARTS compilation.


- with the current version (no-cloudbox throws an error) you could make
a "cloudy" run (with empty results for the pure clearsky cases) and an
explicit clearsky run, and postprocess the results to whatever you need.


- you could use a manually set cloudbox (that can be better for some
study setups anyway; it ensures better comparability between different
cases, as they are then equally affected by scattering solver errors
(sphericity, vertical resolution, interpolation, etc.)).



On Wed, Jun 7, 2017 at 1:26 PM, Jakob Sd wrote:


Hi,

recently there has been a change in the way DOIT and DISORT handle
atmospheres where the cloud box is switched off (cloudbox_on = 0).
Before, they just skipped the scattering calculation, threw a
warning, and everything was OK, as the clear-sky calculations
afterwards took care of it.
But now they throw a runtime error, which means that the
calculation is stopped and the results will be empty for that
atmosphere. I understand that this runtime error makes sense if
someone wants to calculate with scattering but by mistake switches
off the cloud box. But if someone has a batch of atmospheres of
which some are clear-sky atmospheres and uses
cloudboxSetAutomatically, this can be quite uncomfortable, because
all the clear-sky atmospheres that were correctly calculated before
are now empty, and the user has to manually select those atmospheres
from his batch and calculate them using clear-sky ARTS.

Greetings from Hamburg,

Jakob






--
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth an

[arts-dev] ARTS 2017 workshop

2017-02-22 Thread Patrick Eriksson

Dear radiative transfer friend,

We are pleased to announce a new workshop in the series of "ARTS
workshops". You don't need to be an ARTS user or developer to
participate; the workshop is open to all with an interest in
atmospheric radiative transfer. There is normally a strong focus on
microwave to infrared radiative transfer, but other wavelength regions
are also of interest.


The place is the same as last time, Kristineberg (about 100 km north of
Gothenburg). Time for the actual workshop: September 6-8 (Wednesday
morning to Friday lunch). We will arrange transport between Gothenburg
and Kristineberg, departing from Gothenburg around 15:00 on September 5.
That is, you need to arrive in Gothenburg not too late on Sep 5, and
should be able to travel back on Sep 8.


The general goal of the workshop is as usual, that the ARTS user 
community (and also people working with other RT models) can meet, get 
to know each other, solve practical problems, and discuss the further 
development of the program. As always, we will have only a relatively 
small number of talks, and instead more time for group work and 
discussions. The present/planned main development of ARTS is directed 
towards

- Faster scattering calculations
- Running OEM inside ARTS
- Non-LTE
but the workshop is not restricted to these topics.

If you are interested in participating, then please fill in the 
pre-registration form at


https://arts.mi.uni-hamburg.de/service/workshop/arts2017.php

in order to allow us to plan the program. The deadline for
pre-registration is March 31. Since the available space at Kristineberg
is limited, we have to limit the meeting to roughly 25 persons. If more
persons are interested, it will be first come, first served.


Kristineberg is a marine research station. The station offers full board
and lodging, but the number of rooms is limited and most workshop
participants will need to share double rooms. At the moment there are
only three single rooms at hand. If you require a single room, indicate
this under Comments. Transport to/from Gothenburg is arranged at the
start/end of the workshop. You only pay for room and food at
Kristineberg. We cannot yet give you an exact price, but it should be in
the order of 250 euro.


We send our best regards and hope to see you at Kristineberg,

Patrick Eriksson,
Stefan Buehler



[arts-dev] Fwd: 1st Summer Snowfall Workshop, 28-30 June 2017, Cologne, Germany

2017-01-19 Thread Patrick Eriksson

Hi all,

Information found below about a workshop with strong connection to 
ongoing ARTS development. For example, there should be some workshop 
contribution(s) associated with the database of single scattering 
properties we are developing.


Bye,

Patrick




 Forwarded Message 
Subject:1st Summer Snowfall Workshop, 28-30 June 2017, Cologne, Germany
Date:   Wed, 18 Jan 2017 14:38:34 +0100
From:   Stefan Kneifel 
To: 	zhad...@jifresse.ucla.edu , 
jani.tyyn...@fmi.fi , patrick.eriks...@chalmers.se 
, robin.ekel...@chalmers.se 
, Chris Westbrook 
, jussi.s.leino...@jpl.nasa.gov 
, zxj...@psu.edu , 
kwo-sen@nasa.gov , f.pr...@isac.cnr.it 
, christopher.willi...@colorado.edu 
, davide.o...@unibo.it 
, toshihisa.matsu...@nasa.gov 
, benjamin.t.john...@noaa.gov 
, e...@psu.edu , 
jana.mend...@gmail.com , Munchak, Stephen J. 
(GSFC-6120) , karina.mccus...@pgr.reading.ac.uk 
, alan.g...@ecmwf.int 
, ian.ad...@nrl.navy.mil , 
rhoneya...@gmail.com , David Mitchell 

CC: 	Moisseev, Dmitri , Mark Kulie 
, Claire Pettersen , 
gwpe...@wisc.edu, Pavlos Kollias, Prof. , 
Maximilian Maahn , g...@fsu.edu, 
mircea.grec...@nasa.gov, Prigent Catherine , 
Hans Verlinde , Matthew Kumjian , 
Alexander Ryzhkov - NOAA Affiliate , Silke 
Troemel , 'Clemens Simmer' 
, Robin Hogan , Battaglia, 
Alessandro (Dr.) , Tridon, Frederic (Dr.) 
, alexis.be...@epfl.ch




Save-the-date

1st Summer Snowfall Workshop

Scattering and microphysical properties of ice particles

28-30 June 2017, University of Cologne, Germany


Dear colleagues,

As a follow-up to the productive discussion about microwave ice and snow
scattering properties at the last IPWG-IWSSM workshop in Bologna, we
want to organize a 3-day workshop at the University of Cologne (Germany)
from 28-30 June 2017.


The main goals of this workshop are to:


discuss the progress in developing single scattering databases:

  * Existing and ongoing scattering databases
  * Definition of scattering data structure and conventions
  * Scattering database interface tools
  * Scattering database repository

their applications:

  * Bulk scattering properties
  * Scattering approximations
  * Particular requirements for passive and active applications
  * Guidelines and best practices for database users

and to bridge the gap between scattering and microphysical properties
of snow:


  * In-situ properties of ice and snow particles
  * Observational constraints for scattering datasets


In preparation for the workshop, we would appreciate it if you could
give us feedback on your interest in attending before February 3rd,
2017. More information and the link to the registration page will follow
in a separate email.



Kind regards,

Stefan Kneifel and Dmitri Moisseev


--
***
Dr. Stefan Kneifel
OPTIMIce Emmy-Noether Group Leader
Institute for Geophysics and Meteorology
University of Cologne
Pohligstrasse 3 (Room 3.103), 50969 Cologne, Germany
sknei...@meteo.uni-koeln.de
Phone: +49 221 470 6272
---
http://www.researchgate.net/profile/Stefan_Kneifel
http://www.geomet.uni-koeln.de
***



[arts-dev] ARTS workshop 2017, new date?

2017-01-11 Thread Patrick Eriksson

Happy new year,

Unfortunately it turned out that indeed a conference had slipped under
the radar. The suggested dates collide with an AMS radar conference, and
some people want to attend both that conference and the ARTS workshop.


For this reason, we have now checked with Kristineberg and we can move
the workshop one week, to Sep 6-8. Would anybody have a problem with
this time period? If you have, please email during this week. We would
like to set the date during next week.


Bye,

Patrick and Stefan




First email:

Hi all,

Our Christmas present to you!

We have now decided to arrange a new ARTS workshop. We will keep the 
basic format of the workshop, and the venue will be the same as last 
time, Kristineberg north of Gothenburg.


We are aiming for August 30 to Sep 1 (2017). If this time period does
not work for you, inform us and we will consider changing the dates.
(But we will only change if we have missed a clash with some important
conference or something similar.)


The official workshop announcement will come early next year.

Merry Christmas and a happy new year,

Patrick and Stefan



Re: [arts-dev] Radar Monte Carlo Code

2016-12-22 Thread Patrick Eriksson

Hi Ian,

A very nice Christmas gift!

I am on vacation but could not stop myself from taking a first look at
your additions. Besides MCRadar itself, it's excellent that you have
introduced a sensor-based treatment of the polarization; that is a very
old ToDo for me and presently one of the main shortcomings of ARTS. That
stuff should also be incorporated into yCalc.


Did not expect ppathFromRtePos2 to be used here! That method is a bit of
a quick hack, and I am happy if it turned out to work sufficiently well
for you. (I expected many more crashes than 0.001%. Besides being a bit
unstable, ppathFromRtePos2 is slow. I assume refraction must be
considered for ground-based weather radars, but for other cases this can
potentially be handled both much faster and more safely.)


As yCloudRadar was also a quick thing, I am also curious whether you
have done any comparisons?


Anyhow, most important right now is to get this into the repository
version. It is not clear to me whether you want an OK from us to commit
these additions, or whether making svn commits is not possible for you.


You have a big OK from me (and I assume from all others as well) to 
commit. This is great stuff and brings ARTS closer to be a complete tool 
for the microwave region.


(If we over here have to put this into svn it could take some time as it 
is holiday times. I have no time until Jan. Anybody else?)


Cheers,

Patrick



On 2016-12-21 21:27, Ian S. Adams wrote:

Dear ARTS Developers,

Attached is the Monte Carlo code for solving the radiative transfer
equation for a profiling weather radar. In developing the radar module,
I expanded the functionality within mc_radar to rotate from the antenna
frame to the ARTS reference frame used for radiative transfer
calculations. This includes functionality to rotate the polarization
basis to (and from for propagation away from the radar) the antenna
boresight polarization basis. A README document accompanies the code,
further explaining the additions.

Cheers,
Ian







   Ian Stuart Adams
   Electronics Engineer, Remote Sensing Division
   U.S. Naval Research Laboratory
   T 202.767.1937   F 202.767.7885   DSN 297.1937
   www.nrl.navy.mil









[arts-dev] ARTS workshop 2017

2016-12-21 Thread Patrick Eriksson

Hi all,

Our Christmas present to you!

We have now decided to arrange a new ARTS workshop. We will keep the 
basic format of the workshop, and the venue will be the same as last 
time, Kristineberg north of Gothenburg.


We are aiming for August 30 to Sep 1 (2017). If this time period does
not work for you, inform us and we will consider changing the dates.
(But we will only change if we have missed a clash with some important
conference or something similar.)


The official workshop announcement will come early next year.

Merry Christmas and a happy new year,

Patrick and Stefan


Re: [arts-dev] Abs lookup, introduce t_grid?

2016-11-15 Thread Patrick Eriksson

Hi Stefan,

I understand that setting abs_t to be constant is an option already now, 
but that does not give a speed improvement!? Or is there an internal 
switch that is triggered that I don't know about?


Bye,

Patrick


On 2016-11-15 15:06, Stefan Buehler wrote:

Hi Patrick,

you can use a constant reference T profile already now. I think it
roughly doubles the size of the lookup table, though.

/Stefan





[arts-dev] Abs lookup, introduce t_grid?

2016-11-15 Thread Patrick Eriksson

Hi all,

I struggled a bit to set up absorption lookup tables for our Odin/SMR 
processing. For some frequency modes we extend the retrieval into the 
thermosphere, and this causes problems. My reference temperature profile 
is about 170 K at the mesopause, and accordingly abs_t_pert can not go 
below about -160 K. This means that I have only a 160 K margin downwards 
in the thermosphere, which is by far too narrow.


My present solution is to not allow the reference temperature to be
above 300 K, and instead have an abs_t_pert going to high positive
values (+600 K). This works and is OK.


However, this got me thinking. The simplest for me would in fact be to
set abs_t to e.g. 250 K at all altitudes; that would basically give me a
fixed t_grid. Further, with modern computers where memory is not a
problem, maybe it is time to give up on using abs_t + abs_t_pert, and
instead just have a t_grid. That is, to have a standard "rectangular"
set-up, with a standard pressure and temperature grid.



I suggest switching to a fixed t_grid as I think it could speed up the
interpolation significantly. I assume that with the present abs-table,
new temperature grid positions must be calculated for each altitude (as
abs_t(i)+abs_t_pert varies). Stefan: Can you confirm this? Have you
considered the speed impact of this?


With a fixed t_grid, a given temperature has the same grid position at 
all altitudes.
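A hedged numpy sketch of that point (made-up grids, not the actual
lookup code): with abs_t + abs_t_pert the effective temperature grid
differs per altitude, so the grid position of a given temperature must
be searched at every level, while a fixed t_grid needs a single search.

import numpy as np

t_pert = np.array([-20.0, 0.0, 20.0, 40.0])      # perturbation grid [K]
abs_t = np.array([290.0, 250.0, 210.0])          # reference profile [K]
t_query = 255.0                                  # temperature to interpolate at

# per-altitude grids: one search per level
pos_per_level = [np.searchsorted(t_ref + t_pert, t_query) for t_ref in abs_t]

# fixed grid: one search, valid at all altitudes
t_grid = np.array([180.0, 220.0, 260.0, 300.0])
pos_fixed = np.searchsorted(t_grid, t_query)
print(pos_per_level, pos_fixed)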



Any comments?

Bye,

Patrick


[arts-dev] ARTS' SingleScatteringData

2016-11-09 Thread Patrick Eriksson

Hi all,

We (Jana and Patrick) have discovered several issues in the last weeks
around the SingleScatteringData format.

1. Definition of direction
-
A direction can be specified by how the photons move, or by the
direction in which you observe to detect the photons. The radiative
transfer functions in ARTS use the latter definition, and call this a
line-of-sight (LOS). We have not found a clear statement of whether the
scattering data format assumes photon directions or LOS. In fact,
different assumptions have been made in DOIT and MC. In MC, LOS values
are mirrored before extracting scattering properties, while this is not
done in DOIT.
Our discussion of scattering data follows Mishchenko et al (2002) and we
should stick to it. With this interpretation, MC is presently doing the
right thing. As far as we understand, the issue has no influence on DOIT
for random orientation. For horizontally aligned particles, all is OK
for stokes_dim 1 and 2 (due to reciprocity), but there are issues for
higher stokes_dims (namely sign errors in the lower left and upper right
matrix blocks).


2. Azimuth angle
-
In ARTS' definition of LOS the azimuth angle is counted clockwise, while
for scattering data the azimuth angle goes in the opposite direction
(Fig 6.1 in ATD, which is consistent with Mishchenko et al (2002)). This
is not considered by either MC or DOIT, and should give a sign error for
stokes_dim 3 and 4.


3. Format for "horizontally aligned"
-
We have now realized that this format is not as general as we (at least
JM+PE) thought. It does not treat all horizontally aligned or
azimuthally randomly oriented particles. The (orientation averaged)
particles must also be symmetric around the horizontal plane. Such a
symmetry will rather be the exception when working with arbitrarily
shaped particles (and using DDA), and also excludes, e.g., realistically
shaped rain drops.
We could introduce a new format for this, but that would make code and
documentation even more complicated.
Expressed simply, and discussing the phase matrix: we currently store
the left part of the matrix, holding data for incident and scattered
zenith angles (in table cols and rows, respectively). By making use of
the reciprocity theorem, we could get away with storing just the upper
triangle, i.e. with the same amount of data as now. But that would make
the internal storage more complicated and require heavier calculations
to extract the data (not just sign changes are needed; a transformation
matrix, though simple, must be applied). So we simply suggest that we
store the complete phase matrix. That is, the incoming zenith directions
will be [0,180] and not just [0,90] as now. And to keep things as simple
as possible we suggest doing the same for abs_vec and ext_mat.
We don't need to change the fields in the data format, but this should
still be a new version of the format. And when we are introducing a new
format we would also like to rename the "ptypes", as
"horizontally_aligned" is not a good name when we start to work with
tilted particles. We suggest the names
  totally_random
  azimuthally_random

(We are not 100% sure about some of the theoretical details, but the
three main remarks should still be valid.)

Any comments or opinion?

We (mainly Jana) plan to start attacking these things relatively soon.
If anybody wants to help out in the revision, please let us know.

Bye,

Patrick and Jana


[arts-dev] New ARTS features

2016-10-19 Thread Patrick Eriksson

Dear ARTS users,

after a period with somewhat slower development of ARTS, we are again in
an active period. Some stuff has already been added and we are planning
some more additions. With this email we want to briefly announce these
features, and at the same time clarify how we add new features and how
we prefer that they are used.


We are as usual committing our changes to the development version
(presently v2.3, later v2.5 or v3.1). Hence, the additions are available
from day 1. Or rather, the additions are at hand already before they are
ready and properly tested.

The alternative would be an internal development branch and releasing
the additions only after we have tested the new feature, and published
an article about it. We want to avoid this. It would make the
maintenance of ARTS more complicated.  More importantly, it would reduce
the number of persons that give feedback and contribute to the testing
of the new feature.

That is, we happily see that you are using new and experimental 
features, as long as it is done in collaboration with us. You will then

get help to make sure that the new feature is used as intended, and we
get feedback that helps us to improve things. To be clear, we prefer
that the main developer(s) on our side is included in publications where
new ARTS features are used. The normal end point of this period is when
we have made a publication that introduces the addition.

Here is a list of recent and planned additions, and the main person to
contact if you want to start using it:

DISORT and RT4: Jana (more or less clear additions)

OEM: Patrick (ready, but with limited scope compared to Qpack)

Single scattering data: Robin/Patrick and Manfred (first data should be
added to ARTS site soon)

Running 1D scattering solver on 3D atmospheres: Patrick (to be implemented)

Oxygen line mixing: Richard Larsson

non-LTE: Richard Larsson (in early development)

New standard setups for meteorological sensors: Alex Bobryshev

More robust DOIT scattering solver: Jacob / Stefan

TYPHON Python interface: Lukas / Oliver

DOIT Jacobians: Jana

Mapping of LWC, IWC and RWC to pnd_fields: Jana / Manfred / Verena

New surface features: Patrick

Email address to persons mentioned found in cc. If you have just general
questions about these or other ARTS features, please send the question
to arts-users instead. On our side, we will try to make a small
announcement on the arts mailing lists when we consider a new feature to
be relatively stable and could be of interest for others. That is, more
information will follow.

Kind regards,

Stefan and Patrick


Re: [arts-dev] scat_data issues

2016-08-26 Thread Patrick Eriksson

Hi Stefan,

In general I agree not to put all checks in the same function. But this
is not a clear-cut case. The size of scat_data shall be consistent with
pnd_field, which in turn should be consistent with cloudbox_limits. So
there is no clear place to separate scat_data checks from other cloudbox
checks.


In fact cloudbox_checkCalc already contains some checks of scat_data. 
Which makes sense for me. scat_data, pnd_field and cloudbox_limits are 
the main "cloudbox variables".


Another reason: if we introduce scat_data_check, we must add it to a
number of ARTS WSMs, and modify many, many cfiles. I personally would
probably need to modify about 50 scripts if scat_dataCheck becomes
mandatory.


Bye,

Patrick




On 2016-08-26 13:38, Stefan Buehler wrote:

Hi all,

I agree with Patrick that a mandatory check method is the best
solution. :-)

But modernising scat_dataCheck and making it mandatory (via a
_checked flag) seems cleaner to me than including it in the cloudbox
check. Better not to entangle issues that can be regarded separately.


These check functions seem to have evolved as a general strategy, I
think they are quite intuitive and user friendly. More so if their
scope is limited and clear, less if there is a single huge check
function that does all kinds of things.

All the best,

Stefan

P.s.: Jana, I think this is really great work, and very timely.
Oliver and I spent some time yesterday looking at your test case and
debugging the scat data merging method. So, the comparison of the
different solvers has already been fruitful to squash bugs.



Re: [arts-dev] scat_data issues

2016-08-26 Thread Patrick Eriksson

Hi Jana,

I did not have scat_dataCheck active in my memory. I think this is a
lesson that non-mandatory stuff will be forgotten. To avoid nasty errors
and incorrect results, we should make the most basic checks mandatory.


My suggestion is then to extend cloudbox_checkedCalc. My view on
cloudbox_checkedCalc is that the tests we discuss are in fact inside the
scope of that WSM. So it is just strange that they are not already
there!


(If some tests turn out to be computationally demanding, then I prefer
to have option flags of the type "I know what I am doing" to deactivate
the checks.)


Regarding normalisation, how big a difference is there between
quadrature rules? 1%, 10% or 100%? It seems reasonable to at least check
that the normalisation is OK to within a factor of 2. (With an option to
deactivate this, if you use a solver that checks this anyhow.)
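For orientation, a hedged sketch of such a normalisation check for
totally random orientation (an isotropic test phase function and made-up
coefficients; real data would use the tabulated Z11 and its angle grid):
the phase function integrated over the sphere should recover the
scattering cross section K11 - alpha1.

import numpy as np

za = np.radians(np.linspace(0.0, 180.0, 181))           # scattering angle grid
k11, alpha1 = 2.0e-12, 0.5e-12                          # extinction, absorption [m2]
z11 = np.full_like(za, (k11 - alpha1) / (4.0 * np.pi))  # isotropic test case

sca = 2.0 * np.pi * np.trapz(z11 * np.sin(za), za)      # integral over solid angle
rel_err = abs(sca / (k11 - alpha1) - 1.0)
print(rel_err)                                          # flag if larger than, say, 0.1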


Bye,

Patrick



On 2016-08-25 20:10, Jana Mendrok wrote:

Hi,

I'm currently implementing an interface to the RT4 solver and am testing
it. That was at least the original plan. Partly it however turns out to
be more of "fall into the traps" and "stumble upon issues" with the
other solvers (which I intended to use as reference)...

The current issue I stumbled upon is that there seems to be no
(sufficiently) rigid test on the validity (or at least
eligibility/adequacy/proper qualification) of the scattering data
(scat_data).

I've created my scat_data from ARTS' TMatrix interface. It happened that
one of the particles was too challenging for TMatrix and produced a
couple of NaNs and also negative extinction and absorption coefficients
(K11 and alpha1). While the NaNs could be avoided (equivalent to a
TMatrix fail), that's hardly possible for the negatives (they are
"regular" TMatrix output).

The ARTS scattering solvers reacted very differently to the presence of
these invalid data:
- RT4 gave a runtime error due to the scattering matrix normalization
being too far off (I guess Disort would do the same; it wasn't tested
here as I used oriented particles, which aren't handled by Disort).
- DOIT ran through, providing results that looked not immediately
suspicious :-O
- MC ran into an assertion within an interpolation.

That's quite unsatisfactory, I think, and should be handled in some
consistent manner. The question is how.
Some ideas below. Do you have any further ideas, suggestions or
comments?

Appreciate any input.
Wishes,
Jana


My thoughts/ideas:

- leave it to each solver to check for that (but then we need to go over
them to do that)?

- make an additional check method and another check variable for the
scat_data?
There is already a scat_dataCheck WSM, which is rarely used. It e.g.
checks that the scat_data cover the required frequencies, but also the
scattering matrix normalization, the latter only available for random
orientation, though. In my experience it hasn't been too helpful (data
coming from the atmlab Mie interface - as well as from ARTS' TMatrix
interface, as I learned these days - frequently don't pass the
normalisation check, which is, among other things, due to the type of
quadrature used. According to my experience, such norm issues are better
handled by each solver separately), and since it's not mandatory, I
avoid it.
But it would be an option to modify this (make the norm check optional,
instead check for NaN and negative values) and make it mandatory
(through a checked flag).

- An issue is, of course, that one does not really want to check
frequently used data (e.g. the arts-xml-data contents, data from the
future ice/snow SSP database we are creating...) for those invalid
entries each time again. So maybe give the data structure itself a flag
and provide a WSM that does the checking and sets the flag? The above
check method could e.g. look for this flag and apply the whole check
suite only to so-far-unchecked data.



--
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=




Re: [arts-dev] jacobian variable order

2016-04-27 Thread Patrick Eriksson

Hi Simon.

Sorry this is not yet documented.

The storing order should be pressure, latitude and longitude. That is,
latitude runs faster than longitude.


But there could be mistakes here. Have I done anything wrong in the new
methods setting up xa etc.?
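In numpy terms, the stated storing order corresponds to the following
hedged sketch (grid sizes made up; assuming C-ordered reshapes, so the
last axis runs fastest):

import numpy as np

n_p, n_lat, n_lon = 4, 3, 2
x = np.arange(n_p * n_lat * n_lon)       # retrieval vector for one quantity

field = x.reshape(n_lon, n_lat, n_p)     # pressure as last (fastest) axis

k = 17                                   # which grid point does x[k] belong to?
i_lon, i_lat, i_p = np.unravel_index(k, field.shape)
print(i_lon, i_lat, i_p)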


Bye,

Patrick

On 04/27/16 09:45, Simon Pfreundschuh wrote:

Dear all,


I was running into some trouble with 2D OEM computations that didn't
converge, and tracked it down to the x vector being reshaped to the
retrieval grid differently from what is implied by the Jacobian. I know
now that pressure is the fastest running index and latitude runs slower,
but what about longitudes?

I searched for information on this in all three guides plus the online
doc but couldn't find
anything.

Best,

Simon




Re: [arts-dev] Error caused by partial derivative of lineshape function

2016-03-31 Thread Patrick Eriksson

Hi,


if not needed, perhaps we could simply remove the less accurate Kuntz
lineshapes, and just keep the Kuntz6 one?



OK for me.

And in general, things like this should be documented in some manner.
For example, could it be marked in abs_lineshapeDefine which lineshapes
support Jacobians?


That said, is it not a bit odd that to find a definition of lineshapes
you need to do:


arts -d abs_lineshapeDefine

and not

arts -d abs_lineshape

? At least I tried the latter first when updating my memory.

Bye,

Patrick




/Stefan


On 31 Mar 2016, at 17:25, Richard Larsson 
wrote:

Hello Ole,

Yeah, it is because we moved a lot of the partial derivatives to
lower levels.  You are asking for a partial derivative of something
that requires knowing the partial derivative of the line shape with
respect to your variable.  This is trivial if the function returns
the phase shift but not so trivial otherwise.  In short, it
requires a numerical implementation for the faster line shapes.

The only faster line shape with a numerical implementation so far
is Voigt_Kuntz6.  Can you change this to be your line shape?

//Richard

2016-03-31 16:18 GMT+02:00 Ole Martin Christensen: I was trying to rerun
some old code and got the following error message from ARTS today:


Run-time error in controlfile:
/tmp/atmlab-olemar-tp233502eb_6598_4500_90ae_1aeaf2753936/cfile_yj.arts

Run-time error in method: yCalc
Run-time error in function: iyb_calc
Run-time error in agenda: iy_main_agenda
Run-time error in method: iyEmissionStandard
Run-time error in agenda: propmat_clearsky_agenda
Run-time error in method: propmat_clearskyAddOnTheFly
Run-time error in agenda: abs_xsec_agenda
Run-time error in method: abs_xsec_per_speciesAddLines
This is an error message. You are using Voigt_Kuntz3. Your selected
*jacobian_quantities* requires that the line shape returns partial
derivatives. Stopping ARTS execution. Goodbye.


Anyone know why?


Ole Martin
