Re: Surface properties

2024-01-19 Thread Patrick Eriksson

Leo,

Assuming you are using pyarts, you find documentation here:

https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.GriddedField2.html#pyarts.arts.GriddedField2

See especially point 3 under __init__.
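
As a minimal sketch (the grids and values below are placeholders, not
taken from your setup; it uses plain attribute assignment, the same
pattern as in the code quoted below):

import numpy as np
import pyarts.arts as pa

# Hypothetical ERA5-like inputs: 1D lat/lon grids and a (lat, lon) matrix
lat = np.linspace(-90, 90, 181)
lon = np.linspace(-180, 180, 361)
skt = np.full((lat.size, lon.size), 288.0)  # placeholder skin temperatures

gf2 = pa.GriddedField2()
gf2.gridnames = ["Latitude", "Longitude"]
gf2.grids = [lat, lon]
gf2.data = skt
gf2.name = "Skin temperature"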


For completeness, if using xarray here is another option:

https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts_ext.GriddedFieldExtras.from_xarray.html#pyarts.arts_ext.GriddedFieldExtras.from_xarray

Bye,

Patrick


On 2024-01-18 17:21, leopio.dadde...@artov.isac.cnr.it wrote:

Dear Patrick,

many thanks for your reply. Anyway, I think I was not too clear. I am 
working with skin temperature as a matrix (and likewise with the wind at 
10 m and the land/ocean mask). I read both skin temperature and wind from 
ERA5 data, so the data change according to my needs (obviously, the 
land/ocean mask is always the same). Your code is designed only for a 
scalar, constant skin temperature value. Is that correct?


Thanks,
Leo Pio





Dear Leo Pio,

Below you find Python code that you can use if you are working with 
scalar skin temperatures. Otherwise, I assume it is enough of a starting 
point for you to make a more general method.


Bye,

Patrick


import numpy as np
import pyarts.arts as pa
from pyarts.workspace import Workspace


def GriddedField2GloballyConstant(
    ws: Workspace,
    name: str,
    value: float,
) -> None:
    """
    Sets a WSV of type GriddedField2 to hold a constant value.

    The WSV is assumed to represent geographical data, and the dimensions
    are set to Latitude and Longitude. The data are defined to cover the
    complete planet.

    :param ws: Workspace.
    :param name:   Name of WSV to fill.
    :param value:  Fill value.
    """

    gf2 = pa.GriddedField2()
    gf2.gridnames = ["Latitude", "Longitude"]
    # Two-point grids spanning the planet
    gf2.grids = [np.array([-90, 90]), np.array([-180, 360])]
    gf2.data = np.full((2, 2), value)
    gf2.name = "Generated by easy arts function GriddedField2GloballyConstant"
    setattr(ws, name, gf2)
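
A hypothetical usage sketch (the WSV name is just an example; the
GriddedField2Create call mirrors the one in your own snippet):

ws = Workspace()
ws.GriddedField2Create("SkinTemperature")
GriddedField2GloballyConstant(ws, "SkinTemperature", 288.0)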


On 2024-01-17 10:08, leopio.dadde...@artov.isac.cnr.it wrote:

Dear ARTS community,

within my simulation, I am setting the surface properties. In 
particular, when I use the Tessem and Telsem models to calculate the 
emissivity and reflectivity of ocean and land, respectively, I need 
as input the wind at 10 m, the skin temperature and a land/ocean mask 
(among others). I read these variables from NetCDF files using Python 
(i.e. netCDF4 and NumPy).
In the related agenda, I use "InterpGriddedField2ToPosition", which 
requires a "GriddedField2" variable (i.e. skin temperature, or wind) 
as input. To create this variable, I do the following (for instance 
with skin temperature):


ws.GriddedField2Create("SkinTemperature")
ws.Copy(ws.skinTemperature,skinTemperature)

At this point, I get the following error:
"Could not convert input [here there are the values of the skin 
temperature matrix] to expected group GriddedField2."


Where am I wrong? "Copy" is a method to fill a GriddedField2 variable.

I hope I was clear; any help is welcome. Thanks.

Leo Pio











Re: Surface properties

2024-01-17 Thread Patrick Eriksson

Dear Leo Pio,

Below you find Python code that you can use if you are working with 
scalar skin temperatures. Otherwise, I assume it is enough of a starting 
point for you to make a more general method.


Bye,

Patrick


import numpy as np
import pyarts.arts as pa
from pyarts.workspace import Workspace


def GriddedField2GloballyConstant(
    ws: Workspace,
    name: str,
    value: float,
) -> None:
    """
    Sets a WSV of type GriddedField2 to hold a constant value.

    The WSV is assumed to represent geographical data, and the dimensions
    are set to Latitude and Longitude. The data are defined to cover the
    complete planet.

    :param ws: Workspace.
    :param name:   Name of WSV to fill.
    :param value:  Fill value.
    """

    gf2 = pa.GriddedField2()
    gf2.gridnames = ["Latitude", "Longitude"]
    # Two-point grids spanning the planet
    gf2.grids = [np.array([-90, 90]), np.array([-180, 360])]
    gf2.data = np.full((2, 2), value)
    gf2.name = "Generated by easy arts function GriddedField2GloballyConstant"
    setattr(ws, name, gf2)


On 2024-01-17 10:08, leopio.dadde...@artov.isac.cnr.it wrote:

Dear ARTS community,

within my simulation, I am setting the surface properties. In 
particular, when I use the Tessem and Telsem models to calculate the 
emissivity and reflectivity of ocean and land, respectively, I need as 
input the wind at 10 m, the skin temperature and a land/ocean mask 
(among others). I read these variables from NetCDF files using Python 
(i.e. netCDF4 and NumPy).
In the related agenda, I use "InterpGriddedField2ToPosition", which 
requires a "GriddedField2" variable (i.e. skin temperature, or wind) as 
input. To create this variable, I do the following (for instance with 
skin temperature):


ws.GriddedField2Create("SkinTemperature")
ws.Copy(ws.skinTemperature,skinTemperature)

At this point, I get the following error:
"Could not convert input [here there are the values of the skin 
temperature matrix] to expected group GriddedField2."


Where am I wrong? "Copy" is a method to fill a GriddedField2 variable.

I hope I was clear; any help is welcome. Thanks.

Leo Pio







Re: [EXTERNAL] [BULK] 3D MC

2023-12-06 Thread Patrick Eriksson

Ian,

Thanks for the input. Great that you have stress-tested MC. Too bad that 
it revealed a limitation.


Good suggestion about iyMC. Today it would not be possible to do the 
random sampling from yCalc; it would require information about the 
sensor that is not at hand inside yCalc today. But we are planning to 
redesign the way the sensor is described, and this should be considered 
then.


I am not totally sure exactly what you mean by using the MC-sampled 
antenna pattern more broadly, but I tend to agree. It would be good if 
there were mechanisms to give monochromatic pencil-beam calculations 
some width in frequency and space. It would speed up simulations of 
observations. As an example, I have been playing around with a scheme 
to locally average the surface emissivity around the point where you 
hit the surface, to make simulations in coastal areas faster.


And yes, the most tricky part is finding the time for the work.

Bye,

Patrick


On 2023-12-06 16:09, Adams, Ian S {he, him, his} (GSFC-6120) wrote:

Hi Stefan,

I have been contemplating changes to the MC codes. One thing we have 
found is that MCGeneral breaks down when Q starts to get large. We see 
unrealistic results at 684 GHz when using horizontally aligned particles 
with high aspect ratios. Yuli Liu, who is working with us now, did a 
comprehensive analysis, and we believe that the issue is the way the 
backwards algorithm uses importance sampling to avoid inverting the 
extinction matrix; however, this approach neglects the mixing of I and 
Q. I believe this is a simple fix.

The other issue is that MCGeneral is not very ARTS-like. Looking at the way it is 
structured, I think a better approach would be to have an iyMC that traces a single 
"photon," and yMC would integrate these individual results. Random sampling of 
both the antenna pattern and the bandwidth could be performed at this level. I also think 
that the MC sampled antenna pattern could be more widely useful across ARTS.

These papers provide an interesting curveball. The ARTS MC codes are 
particularly slow, and they are not optimized for optically thin or extremely 
optically thick atmospheres. We could look at using these libraries, or at 
least techniques, but I'm not sure how intensive such a restructuring of the 
code would be.

Of course, the tricky piece here is finding someone with the time to do this 
work. But, I think these changes would make the codes significantly more 
usable, and hopefully therefore used.

Cheers,
Ian

On 11/29/23, 11:22 AM, "arts_dev.mi on behalf of Stefan Buehler" 
mailto:arts_dev.mi-boun...@lists.uni-hamburg.de> on 
behalf of stefan.bueh...@uni-hamburg.de > wrote:


Dear all,


I stumbled across this interesting paper on an open C library for 
particularly efficient MC calculations. Could this be the basis of ARTS 
3D MC flux and heating rate calculations? Using MC sampling also for the 
spectral dimension, to be efficient, as in the second paper, which is 
also impressive, I think. They use MC sampling even for the spectral 
lines, if I got it right! (Basically treating each transition as if it 
were its own absorption species.)


/Stefan


https://www.dropbox.com/scl/fi/smsisfgc2it3sx4gov970/J-Adv-Model-Earth-Syst-2019-Villefranque-A-Path-E2-80-90Tracing-Monte-Carlo-Library-for-3-E2-80-90D-Radiative-Transfer-in-Highly.pdf?rlkey=v5yvrm64fnljaf739j4ssllux=0
 



https://www.dropbox.com/scl/fi/r1tm3jdzx57kb85nowmt0/Yaniss_ea_PNAS_2023_smi.pdf?rlkey=8d4a7rb4u8pehckawbfk08c9f=0
 




Re: RTE_POS

2023-12-06 Thread Patrick Eriksson

Hi,

If your version has geo_pos_agenda, you should put geo_posEndOfPpath in 
that agenda.


If no such agenda, geo_posEndOfPpath should be placed inside iy_main_agenda.

In any case, you should not need to do extra calculations; y_geo should 
be set in a standard call of yCalc.

Bye,

Patrick

On 2023-12-06 15:44, leopio.dadde...@artov.isac.cnr.it wrote:

Patrick,

I followed your suggestion, very useful. I am able to get geo_pos (i.e. 
y_pos) but it holds only NaNs. "geo_posEndOfPpath" needs as input 
"ppath", which I generate from "PpathCalc", which in turn requires 
(among others) "rte_pos", "rte_los" and "rte_pos2".
Here is my first doubt: "rte_pos2" should be the result of the 
combination of "rte_pos" and "rte_los". Anyway, I set rte_pos=sensor_pos 
(satellite position) and rte_los=[180,0] (that should be nadir looking). 
I set rte_pos2=[0,0,0], but I am totally not sure about "rte_pos2". If 
you could shed light on this, it would be very useful for me.


Thanks,
Leo Pio


Leo,

If you want to know the complete path through the atmosphere, you can 
do as you outline. If you are only interested in where you end up at 
the surface, you can use the geo_pos mechanism. You need to set 
geo_pos by adding the WSM geo_posEndOfPpath.


Exactly how geo_pos is handled has been changed, and I don't remember 
exactly the status in v2.5.0. But I hope you can figure it out.


With this done, the "geo pos" comes out from yCalc as y_geo.

Please note that you get out proper lat and lon only if running 3D 
calculations. For 1D you basically get some relative lat and lon.


Bye,

Patrick


On 2023-11-29 10:57, leopio.dadde...@artov.isac.cnr.it wrote:

Hi Richard,

many thanks for your answer. I will try to answer your questions.
I am using ARTS 2.5.0.
My entry point is 'yCalc', you are correct. I have some Python scripts 
that call ARTS commands, so I would say that I run ARTS via the custom 
language interface.
Currently, I am getting and saving 'sensor_pos' and 'sensor_los' (they 
match 'y_pos' and 'y_los' but are not the same, right?). But, if I 
understand well, you are saying that I can set 'rte_pos2' and 'rte_los' 
equal to 'y_pos' and 'y_los' and then run 'ppathCalc'.


Best,
Leo Pio




Hi Leo,

What you have encountered can be shortly summarized as rte_pos only
existing inside the Agenda you call. You don't have it at hand anywhere
else. rte_pos also does not represent what you think it does; it is
simply a radiative transfer equation position, and it can be anywhere
inside or outside of the atmosphere.

Before any other specific help can be given, you need to specify what
version of ARTS you are using. Are you running ARTS via Python or via
the custom language interface? Is your entry point to the calculations
via `yCalc`?

Those details matter for the answer you might need. Generally, if you
want to investigate the atmospheric path you are using, you will want
to generate a `ppath` and extract the relevant information. The way to
do that depends on the answers above, and any attempt to answer this
without first filling in these details will give details that are
perhaps not needed.

If you are running it via `yCalc`, you get `y_pos` and `y_los` as
outputs. Those can be used to generate `rte_pos{,2}` and `rte_los`
required for `ppathCalc` to run. You can then extract the relevant
information from the generated `ppath`, either via custom language
commands or just by accessing the data it holds in Python. The
documentation for accessing data in ppath for the latest version of
ARTS available via conda-forge can be found here:
https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.Ppath.html#pyarts.arts.Ppath
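
For instance (a sketch, assuming ppathCalc has already been run on a
pyarts Workspace called ws):

ppath = ws.ppath.value
print(ppath.pos)  # one (altitude, lat, lon) row per point along the path
print(ppath.los)  # the corresponding line-of-sight angles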

//Richard

On Tue, 28 Nov 2023 at 15:51,  wrote:



Dear ARTS community,

I am a new user of ARTS. One of my tasks is to simulate passive
microwave radiometers onboard low Earth orbit satellites. To this end,
I would like to know which region of the Earth's surface the satellite
is looking at. I set the satellite position through "sensor_pos" and
the line of sight of the satellite through "sensor_los". When I try to
get and save in an XML file the geographical position for starting the
radiative transfer calculation (i.e. rte_pos), I get the following
error:

Method WriteXML needs input rte_pos but it is uninitialized.

Can anyone help me on this? Many thanks.

Best regards,
Leo Pio













Re: RTE_POS

2023-11-29 Thread Patrick Eriksson

Leo,

If you want to know the complete path through the atmosphere, you can do 
as you outline. If you are only interested in where you end up at the 
surface, you can use the geo_pos mechanism. You need to set geo_pos by 
adding the WSM geo_posEndOfPpath.


Exactly how geo_pos is handled has been changed, and I don't remember 
exactly the status in v2.5.0. But I hope you can figure it out.


With this done, the "geo pos" comes out from yCalc as y_geo.

Please note that you get out proper lat and lon only if running 3D 
calculations. For 1D you basically get some relative lat and lon.


Bye,

Patrick


On 2023-11-29 10:57, leopio.dadde...@artov.isac.cnr.it wrote:

Hi Richard,

many thanks for your answer. I will try to answer your questions.
I am using ARTS 2.5.0.
My entry point is 'yCalc', you are correct. I have some Python scripts 
that call ARTS commands, so I would say that I run ARTS via the custom 
language interface.
Currently, I am getting and saving 'sensor_pos' and 'sensor_los' (they 
match 'y_pos' and 'y_los' but are not the same, right?). But, if I 
understand well, you are saying that I can set 'rte_pos2' and 'rte_los' 
equal to 'y_pos' and 'y_los' and then run 'ppathCalc'.


Best,
Leo Pio




Hi Leo,

What you have encountered can be shortly summarized as rte_pos only
existing inside the Agenda you call. You don't have it at hand anywhere
else. rte_pos also does not represent what you think it does; it is
simply a radiative transfer equation position, and it can be anywhere
inside or outside of the atmosphere.

Before any other specific help can be given, you need to specify what
version of ARTS you are using. Are you running ARTS via Python or via
the custom language interface? Is your entry point to the calculations
via `yCalc`?

Those details matter for the answer you might need. Generally, if you
want to investigate the atmospheric path you are using, you will want
to generate a `ppath` and extract the relevant information. The way to
do that depends on the answers above, and any attempt to answer this
without first filling in these details will give details that are
perhaps not needed.

If you are running it via `yCalc`, you get `y_pos` and `y_los` as
outputs. Those can be used to generate `rte_pos{,2}` and `rte_los`
required for `ppathCalc` to run. You can then extract the relevant
information from the generated `ppath`, either via custom language
commands or just by accessing the data it holds in Python. The
documentation for accessing data in ppath for the latest version of
ARTS available via conda-forge can be found here:
https://atmtools.github.io/arts-docs-master/stubs/pyarts.arts.Ppath.html#pyarts.arts.Ppath

//Richard

On Tue, 28 Nov 2023 at 15:51,  wrote:


Dear ARTS community,

I am a new user of ARTS. One of my tasks is to simulate passive
microwave radiometers onboard low Earth orbit satellites. To this end,
I would like to know which region of the Earth's surface the satellite
is looking at. I set the satellite position through "sensor_pos" and
the line of sight of the satellite through "sensor_los". When I try to
get and save in an XML file the geographical position for starting the
radiative transfer calculation (i.e. rte_pos), I get the following
error:

Method WriteXML needs input rte_pos but it is uninitialized.

Can anyone help me on this? Many thanks.

Best regards,
Leo Pio









Re: Error with OEM retrieval in ARTS

2023-06-15 Thread Patrick Eriksson

Stuart,

The built-in doc of OEM clarifies that x is both IN and OUT. But there 
is no explanation of what the input states mean. We need to work on the 
documentation!


But there is some help in

/controlfiles/artscomponents/oem/TestOEM.arts

Here you find:

# x, jacobian and yf must be initialised (or pre-calculated as shown below)
#
VectorSet( x, [] )
VectorSet( yf, [] )
MatrixSet( jacobian, [] )


# Or to pre-set x, jacobian and yf
#
#Copy( x, xa )
#MatrixSet( jacobian, [] )
#AgendaExecute( inversion_iterate_agenda )


My memory is that if you leave x empty, it is set to xa. The other 
option is there to allow you to start the iteration from another state.
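
In pyarts terms, the same initialisation would be something along these
lines (a sketch mirroring the cfile above, for a workspace ws):

ws.VectorSet(ws.x, [])
ws.VectorSet(ws.yf, [])
ws.MatrixSet(ws.jacobian, [])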


I don't think we have changed this recently. So it is rather strange 
that your old setup worked. Anyhow, I hope this clarifies how to remove 
the error.


Bye,

Patrick



On 2023-06-15 14:58, Stuart Fox wrote:

Hi developers,

I have an ARTS OEM retrieval set-up that used to work fine based on the 
ARTS trunk from Feb 2022, but recently I’ve updated to the latest 
development version of ARTS and when calling the workspace.OEM() method 
it fails with “Not initialised: x”. Any clues on how to fix this? I am 
initialising the retrieval with workspace.xaStandard(), so I think I am 
correctly initialising the value of xa – it’s not obvious to me why I 
should have to initialise x at all (since presumably it should always be 
set to the same value as xa to begin with?)


Thanks,

Stuart

Dr Stuart Fox  Radiation Research Manager
*Met Office* FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
Tel: +44 (0)1392 885197  Fax: +44 (0)1392 885681
Email: stuart@metoffice.gov.uk  Website: www.metoffice.gov.uk



Re: Fwd: [arts-users] ARTS ICI Cloud Simulations

2022-03-21 Thread Patrick Eriksson

Hi,

Sorry, I should have informed you. Kyle wrote to me on the side as well, 
and there I asked for more details and then answered separately.

Not perfect. Next time I will force him to keep it all on arts-users.

Bye,

Patrick



On 2022-03-21 11:46, stefan.bueh...@uni-hamburg.de wrote:

Hi all, is there anyone who can take this? Stefan


Begin forwarded message:

From: Kyle Johnson
Subject: [arts-users] ARTS ICI Cloud Simulations
Date: 13 March 2022 at 18:06:03 CET
To: arts_users...@lists.uni-hamburg.de
Reply to: kyle.johnso...@colorado.edu


Hello,
I was wondering if you all had a version of the controlfile ICI 
simulation in ARTS that included clouds. My name is Kyle Johnson and I 
am a graduate student at CU Boulder. I have been using the ICI 
simulation in ARTS as the basis for an independent study project and 
need to add clouds in. I have tried adding clouds in on my end and 
have been unsuccessful.

Thank you for your time,
Kyle Johnson (he/him/his)
Graduate Student /University of Colorado Boulder/




Re: Fwd: Eradiate Workshop 2022

2022-03-01 Thread Patrick Eriksson

Stefan and all,

I cannot go, busy with teaching and a too high overall load. From the 
Chalmers side, the best candidate is Vasilis. He has developed MC code 
for the optical region. He is now in Greece and has relatively short 
travel. But his employment at Chalmers ends June 30. So it would be 
better if someone else could go.

Stefan: Do you know more about how they got funding for this? It sounds 
as if we could promote ARTS as something similar for the microwave to IR 
range. This looks like the type of funding we have been lacking.

Bye,

Patrick





On 2022-03-01 09:43, stefan.bueh...@uni-hamburg.de wrote:

Dear all,

I know Yves, so this is legit. Perhaps someone from us should 
participate in the release workshop? This is perhaps similar to the 3-D 
Monte Carlo they do in Munich. But open. Focused on the solar spectral 
range, of course.


Stefan


Begin forwarded message:

From: <n...@eradiate.eu>
Subject: Eradiate Workshop 2022
Date: 28 February 2022 at 16:58:41 CET
To:


Dear Stefan Buehler,

You are receiving this email because you were identified as a 
radiative transfer model user or developer.


The development of Eradiate, a new 3D radiative transfer model, started 
in 2019 with the goal to create a novel simulation platform for 
radiative transfer applied to Earth observation. Eradiate intends to be 
highly accurate and uses advanced computer graphics software as its 
Monte Carlo ray tracing kernel. It provides a modern Python interface 
specifically designed for integration in interactive computing 
environments. It is also free software, licensed under the GNU Public 
License v3.


At the end of March 2022, Eradiate will be released to the public and 
open to contributions from users. On this occasion, the Eradiate team 
will organise a workshop, kindly hosted by ESA/ESRIN in Frascati on 
Tuesday March 29th and Wednesday March 30th, 2022. This workshop will 
be organised with a hybrid setup allowing remote participants to 
attend. Participation in the workshop is open and you can register by 
replying to this email, providing the following information:

· First name
· Last name
· Contact email address
· Affiliation
· Whether you wish to join us in Frascati or prefer to attend remotely

Please be aware that the number of on-premises seats is limited, 
assigned on a first come, first served basis. Registration for 
on-premises participation will be closed on March 21st, 2022.


The workshop announcement letter, with further information on the 
programme, is available here. You can also register to our mailing list 
if you want to be updated about Eradiate in the future.


Kind regards,

Yves Govaerts, for the Eradiate Team





Re: Failing tests

2021-09-22 Thread Patrick Eriksson

Richard, Oliver,

Thanks for your clarifications. My calculations seem to work now.

Bye,

Patrick

On 2021-09-22 09:56, Richard Larsson wrote:
The 2.5 way of absorption lookup table calculations is being 
redesigned. For now you need to manually define the agendas as you do. 
AddLines will be removed, at some point, from the xsec code. The 
reason it's deprecated is that in normal calculations you should be 
putting the line calculations into the propagation matrix agenda. 
Lookup calculations are special here since they just do partial 
calculations.


//Richard


On Wed, Sep 22, 2021, 08:30 Patrick Eriksson 
<patrick.eriks...@chalmers.se> wrote:


Hi again,

Seems that I have found the reason to some of the failing tests "the
hard way". After spending time on some other failing calculations, I
have figured out that the default in ARTS gives absorption
lookup-tables
that miss all lines. For example, TestOdinSMR_1D uses

Copy(abs_xsec_agenda, abs_xsec_agenda__noCIA)

This agenda is defined as

AgendaSet( abs_xsec_agenda__noCIA ){
    abs_xsec_per_speciesInit
    abs_xsec_per_speciesAddConts
}

No inclusion of lines! Another Odin/SMR test uses

AgendaSet( abs_xsec_agenda ) {
    abs_xsec_per_speciesInit
    abs_xsec_per_speciesAddConts
    abs_xsec_per_speciesAddLines
}

and this works. Both tests generate abs tables.

I get a message that abs_xsec_per_speciesAddLines is deprecated. But
why was it removed from the defaults for abs_xsec_agenda before the
alternative is in place?

Anyhow, how shall abs_xsec_agenda be defined to get correct abs tables
in v2.5?

Bye,

Patrick





 Forwarded Message 
Subject: Failing tests
Date: Tue, 21 Sep 2021 17:55:33 +0200
From: Patrick Eriksson <patrick.eriks...@chalmers.se>
To: ARTS Development List <arts_dev.mi@lists.uni-hamburg.de>

Hi all,

I have spent some time trying to figure out how the changes in my
branch could have created some failing tests. But I just ran make
check-all with master and the same tests failed there too, so there
seem to be older issues.

The failed tests are listed below. Are these issues under control? I
noticed that some tests demand an accuracy of 2 nK! The deviation was
2.5 mK, which can be OK. On the other hand, there were also tests
failing with deviations of 30-100 K.

Bye,

Patrick



The following tests FAILED:
  42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
 152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 232 - arts.pyarts.pytest (Failed)






Fwd: Failing tests

2021-09-22 Thread Patrick Eriksson

Hi again,

Seems that I have found the reason to some of the failing tests "the 
hard way". After spending time on some other failing calculations, I 
have figured out that the default in ARTS gives absorption lookup-tables 
that miss all lines. For example, TestOdinSMR_1D uses


Copy(abs_xsec_agenda, abs_xsec_agenda__noCIA)

This agenda is defined as

AgendaSet( abs_xsec_agenda__noCIA ){
  abs_xsec_per_speciesInit
  abs_xsec_per_speciesAddConts
}

No inclusion of lines! Another Odin/SMR test uses

AgendaSet( abs_xsec_agenda ) {
  abs_xsec_per_speciesInit
  abs_xsec_per_speciesAddConts
  abs_xsec_per_speciesAddLines
}

and this works. Both tests generate abs tables.

I get a message that abs_xsec_per_speciesAddLines is deprecated. But why 
was it removed from the defaults for abs_xsec_agenda before the 
alternative is in place?


Anyhow, how shall abs_xsec_agenda be defined to get correct abs tables 
in v2.5?


Bye,

Patrick





 Forwarded Message 
Subject: Failing tests
Date: Tue, 21 Sep 2021 17:55:33 +0200
From: Patrick Eriksson 
To: ARTS Development List 

Hi all,

I have spent some time trying to figure out how the changes in my 
branch could have created some failing tests. But I just ran make 
check-all with master and the same tests failed there too, so there 
seem to be older issues.

The failed tests are listed below. Are these issues under control? I 
noticed that some tests demand an accuracy of 2 nK! The deviation was 
2.5 mK, which can be OK. On the other hand, there were also tests 
failing with deviations of 30-100 K.


Bye,

Patrick



The following tests FAILED:
  42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
 152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 232 - arts.pyarts.pytest (Failed)






Failing tests

2021-09-21 Thread Patrick Eriksson

Hi all,

I have spent some time trying to figure out how the changes in my 
branch could have created some failing tests. But I just ran make 
check-all with master and the same tests failed there too, so there 
seem to be older issues.

The failed tests are listed below. Are these issues under control? I 
noticed that some tests demand an accuracy of 2 nK! The deviation was 
2.5 mK, which can be OK. On the other hand, there were also tests 
failing with deviations of 30-100 K.


Bye,

Patrick



The following tests FAILED:
  42 - arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
  43 - python.arts.ctlfile.slow.artscomponents.clearsky.TestBatch (Failed)
 152 - arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 153 - python.arts.ctlfile.slow.instruments.odinsmr.TestOdinSMR_1D (Failed)
 156 - arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 157 - python.arts.ctlfile.slow.instruments.hirs.TestHIRS_fast (Failed)
 158 - arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 159 - python.arts.ctlfile.slow.instruments.metmm.TestMetMM (Failed)
 180 - arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 181 - python.arts.ctlfile.xmldata.artscomponents.arts-xml-data.TestPlanetIsoRatios (Failed)
 184 - arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 185 - python.arts.ctlfile.xmldata.artscomponents.cia.TestRTwithCIA (Failed)
 232 - arts.pyarts.pytest (Failed)






Re: ReadHITRAN

2021-09-20 Thread Patrick Eriksson

Stefan,

Yes, that sounds reasonable. I am simply lagging behind the development 
and need to catch up on how we do things. I emailed Richard and Freddy 
on the side about some stuff.

But when you brought it up: is there documentation on the replacement 
mechanism? In the email to Richard I also suggested a README in Artscat, 
to clarify the content of the folder.


Bye,

Patrick



On 2021-09-20 11:10, Stefan Buehler wrote:

Dear Patrick,

I think we should put ARTS’ own line catalog in the center wherever possible 
(which is based on converted current HITRAN). Use it, if you are happy with the 
parameters there. If you want other parameters, and there is a good reason for 
that, consider updating it. We have a mechanism to replace individual 
parameters there (and document those substitutions).

Stefan

On 20 Sep 2021, at 10:46, Patrick Eriksson wrote:


Richard,

Thanks for the additional information. Seems that the take-home message 
is that I should look at other ways to set up the calculations. I just 
picked up an old cfile, used that as a starting point and did not even 
consider alternatives to ReadHITRAN.

Bye,

Patrick

On 2021-09-20 09:05, Richard Larsson wrote:

Hi Patrick,

We can of course optimize the reading routine but there's no point in doing 
that.  The methods that read external catalogs should only ever be used once 
per update of the external catalog, so it's fine if they are slow but not too 
slow.

New memory is allocated for every absorption line always.  This is because we 
keep line data local, and the model for the line shape and the local quantum 
numbers don't have to be known at compile-time.

Additionally, the line data is pushed into arrays, so they will double in size 
every time you reach the current size.

If we knew the number of lines and broadening species and local quantum 
numbers, then these allocations happen once for the entire band, but we don't 
in ReadHITRAN or any of the external reading routines.  So you will have 
many-many system calls asking for more memory.  This of course also means that 
you are over-allocating memory since that's how Arrays work in ARTS (because 
that's standard C++).  Again, this is also fine since the external catalog when 
read again will allocate only exactly what is required.

With hope,
--Richard

On Mon, 20 Sep 2021 at 08:09, Patrick Eriksson
<patrick.eriks...@chalmers.se> wrote:

 Richard,

 Thanks for the clarification.

 Is the allocation of more memory done in fixed chunks? Or something
 "smart" in the process? If the former and the chunks are too small,
 then maybe I am doing a lot of reallocations. My impression was that
 memory usage increased quite monotonically, not in noticeable steps.

 If the lines have to be sorted into bands, then the complexity of the
 reading will increase in line with what I have noticed. And likely not
 much to do about it.

 Bye,

 Patrick


 > There are two possible slowdowns there could be still. One is that
 > you hit some line count where you need to reallocate the array of
 > lines because you have too many. The other is that the search for
 > placing the line in the correct band is slow when there are more
 > bands to look through.
 >
 > The former would be just pure bad luck, so there's nothing to do
 > about it.
 >
 > I would suspect the latter is your problem. You need to search
 > through the existing bands for every new line to find where it
 > belongs. Since bands are often clustered closely together in
 > frequency, this could slow down the reading as you get more and
 > more bands. A smaller frequency range means fewer bands to look
 > through.
 >
 > //Richard
 >
 > On Sun, Sep 19, 2021, 22:39 Patrick Eriksson
 > <patrick.eriks...@chalmers.se> wrote:
 >
 >     Richard,
 >
 >      > It's expected to take a somewhat arbitrary time. It reads ASCII.
 >
 >     I have tried multiple times and the pattern is not changing.
 >
 >      > The start-up time is going to be large because of having to
 >      > find the first frequency, which means you have to parse the
 >      > text nonetheless.
 >
 >     Understood. But that overhead seems to be relatively small. In my
 >     test, it seemed to take 4-7 s to reach the first frequency.
 >     Anyhow, this goes in the other direction. To minimise the parsing
 >     to reach the first frequency, it should be better to read all in
 >     one go, and not in parts (which is the case for me).
 >
 >     Bye,
 >
 >     Patrick
 >



Re: ReadHITRAN

2021-09-20 Thread Patrick Eriksson

Richard,

Thanks for the additional information. Seems that the take-home message 
is that I should look at other ways to set up the calculations. I just 
picked up an old cfile, used that as a starting point and did not even 
consider alternatives to ReadHITRAN.


Bye,

Patrick

On 2021-09-20 09:05, Richard Larsson wrote:

Hi Patrick,

We can of course optimize the reading routine but there's no point in 
doing that.  The methods that read external catalogs should only ever be 
used once per update of the external catalog, so it's fine if they are 
slow but not too slow.


New memory is allocated for every absorption line always.  This is 
because we keep line data local, and the model for the line shape and 
the local quantum numbers don't have to be known at compile-time.


Additionally, the line data is pushed into arrays, so they will double 
in size every time you reach the current size.


If we knew the number of lines and broadening species and local quantum 
numbers, then these allocations happen once for the entire band, but we 
don't in ReadHITRAN or any of the external reading routines.  So you 
will have many-many system calls asking for more memory.  This of course 
also means that you are over-allocating memory since that's how Arrays 
work in ARTS (because that's standard C++).  Again, this is also fine 
since the external catalog when read again will allocate only exactly 
what is required.
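
(As an aside, a toy Python sketch of the doubling behaviour, just to
illustrate the amortised growth; ARTS itself does this in C++ via its
Array type:)

capacity, reallocs = 1, 0
for n_lines in range(1, 100001):
    if n_lines > capacity:
        capacity *= 2   # each reallocation doubles the allocation
        reallocs += 1
print(reallocs)  # 17 reallocations for 100000 appended lines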


With hope,
--Richard

On Mon, 20 Sep 2021 at 08:09, Patrick Eriksson 
<patrick.eriks...@chalmers.se> wrote:


Richard,

Thanks for the clarification.

Is the allocation of more memory done in fixed chunks? Or something
"smart" in the process? If the former and the chunks are too small,
then maybe I am doing a lot of reallocations. My impression was that
memory usage increased quite monotonically, not in noticeable steps.

If the lines have to be sorted into bands, then the complexity of the
reading will increase in line with what I have noticed. And likely not
much to do about it.

Bye,

Patrick


> There are two possible slowdowns there could be still. One is that
> you hit some line count where you need to reallocate the array of
> lines because you have too many. The other is that the search for
> placing the line in the correct band is slow when there are more
> bands to look through.
>
> The former would be just pure bad luck, so there's nothing to do
> about it.
>
> I would suspect the latter is your problem. You need to search
> through the existing bands for every new line to find where it
> belongs. Since bands are often clustered closely together in
> frequency, this could slow down the reading as you get more and more
> bands. A smaller frequency range means fewer bands to look through.
>
> //Richard
>
> On Sun, Sep 19, 2021, 22:39 Patrick Eriksson
> <patrick.eriks...@chalmers.se> wrote:
>
>     Richard,
>
>      > It's expected to take a somewhat arbitrary time. It reads ASCII.
>
>     I have tried multiple times and the pattern is not changing.
>
>      > The start-up time is going to be large because of having to
>      > find the first frequency, which means you have to parse the
>      > text nonetheless.
>
>     Understood. But that overhead seems to be relatively small. In my
>     test, it seemed to take 4-7 s to reach the first frequency.
>     Anyhow, this goes in the other direction. To minimise the parsing
>     to reach the first frequency, it should be better to read all in
>     one go, and not in parts (which is the case for me).
>
>     Bye,
>
>     Patrick
>



Re: ReadHITRAN

2021-09-20 Thread Patrick Eriksson

Richard,

Thanks for the clarification.

Is the allocation of more memory done in fixed chunks? Or something 
"smart" in the process? If the former and the chunks are too small, then 
maybe I am doing a lot of reallocations. My impression was that memory 
usage increased quite monotonically, not in noticeable steps.


If the lines have to be sorted into bands, then the complexity of the 
reading will increase in line with what I have noticed. And likely not 
much to do about it.


Bye,

Patrick



There are two possible slowdowns there could be still. One is that you 
hit some line count where you need to reallocate the array of lines 
because you have too many. The other is that the search for placing the 
line in the correct band is slow when there are more bands to look through.


The former would be just pure bad luck, so there's nothing to do about it.

I would suspect the latter is your problem.  You need to search through 
the existing bands for every new line to find where it belongs.  Since 
bands are often clustered closely together in frequency, this could slow 
down the reading as you get more and more bands. A smaller frequency 
range means fewer bands to look through.


//Richard

On Sun, Sep 19, 2021, 22:39 Patrick Eriksson 
<patrick.eriks...@chalmers.se> wrote:


Richard,

 > It's expected to take a somewhat arbitrary time.  It reads ASCII.

I have tried multiple times and the pattern is not changing.


 > The start-up time is going to be large because of having to find the
 > first frequency, which means you have to parse the text nonetheless.

Understood. But that overhead seems to be relatively small. In my test,
it seemed to take 4-7 s to reach the first frequency. Anyhow, this goes
in the other direction. To minimise the parsing to reach the first
frequency, it should be better to read all in one go, and not in parts
(which is the case for me).

Bye,

Patrick



Re: ReadHITRAN

2021-09-19 Thread Patrick Eriksson

Richard,


It's expected to take a somewhat arbitrary time.  It reads ASCII.


I have tried multiple times and the pattern is not changing.


The start-up time is going to be large because of having to find the 
first frequency, which means you have to parse the text nonetheless.


Understood. But that overhead seems to be relatively small. In my test, 
it seemed to take 4-7 s to reach the first frequency. Anyhow, this goes 
in the other direction. To minimise the parsing to reach the first 
frequency, it should be better to read all in one go, and not in parts 
(which is the case for me).


Bye,

Patrick


ReadHITRAN

2021-09-19 Thread Patrick Eriksson

Hi all,

I have noticed that the time used by ReadHITRAN is not linear in the 
width of the frequency range. For example, reading all lines (of five 
main species) between 800 and 840 cm-1 took 90 s, while reading 800-820 
and 820-840 cm-1 in two separate calls took 57 s in total.


Is this expected?

(The above uses 1-2% of my RAM).

Bye,

Patrick


Re: VMRs

2021-09-16 Thread Patrick Eriksson

Stefan,


For HSE it is up to the user to apply this "fine tuning" or not. This including 
to include adding call of the HSE method in OEM iterations, to make sure that HSE is 
maintained after an iteration. The VMR rescaling should also be included in the iteration 
agenda, if the retrieval can change H2O close to the ground. That is, a VMR rescaling 
would not be something completely new, as I see it.


It seems to me that this leads into a logical loop: if you retrieve H2O 
and O3, the retrieved H2O value directly affects the O3 value due to the 
rescaling. As you write, in principle this should even be in the 
Jacobian, as a cross-term. With more water, the lines of all other gases 
get weaker.

It is true that if there is more of the one there has to be less of the other, 
but argh, this is so ugly.

Perhaps the deeper reason why AER went for the other definition? If VMRs refer 
to the dry pressure, and the dry gases are all either quite constant or very 
rare, then retrievals are more independent.


If we switch to the other definition, then the VMR of e.g. N2 would 
stay the same in a retrieval of H2O. This is why I initially found this 
option nice. But it would not change the physics, and the 
cross-dependences between species would not disappear. You have to 
remember that VMR is a relative measure. To get the absolute amount of 
a species, you still need to calculate the partial pressures. That is, 
you need to "distribute" the total pressure among the gases, and as I 
understand it a general expression for this would be:


p_i = VMR_i * p / VMR_sum

where p_i is partial pressure of species i, VMR_i its VMR, p pressure 
and VMR_sum the sum of all VMRs.


Our present definition is based on VMR_sum = 1, while in the 
alternative version it will deviate; with more H2O, VMR_sum will 
increase, which will affect p_i even if VMR_i is unchanged.
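
As a numeric sketch of the two conventions (a toy Python example with
round, hypothetical values; not ARTS code):

p = 1013e2  # total pressure [Pa]
# Dry-air composition (sums to 1 for dry air) plus 3% H2O:
vmr = {"N2": 0.78, "O2": 0.21, "Ar": 0.01, "H2O": 0.03}

# Present convention: VMRs refer to total air and should sum to 1, so
# the fixed gases are rescaled by 1 - VMR_H2O = 0.97:
scaled = {s: v * (1 - vmr["H2O"]) if s != "H2O" else v
          for s, v in vmr.items()}

# General expression from above: p_i = VMR_i * p / VMR_sum
vmr_sum = sum(vmr.values())              # 1.03 if left unscaled
p_n2_unscaled = vmr["N2"] * p / vmr_sum  # ~76.7 kPa
p_n2_scaled = scaled["N2"] * p           # ~76.6 kPa (here VMR_sum = 1)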


Or do I miss something?

Bye,

Patrick


Re: VMRs

2021-09-16 Thread Patrick Eriksson

Hi again,

Great that we agree on the problem. OK, let's keep the present 
definition of VMR (that it refers to the sum of all gases, not just 
the "constant" ones).


We should then for sure introduce a rescaling method (or maybe 
several). I expressed myself poorly; I rather meant that introducing 
such a method is not a fully complete solution, if we consider the 
"fine print". What I had in mind is the Jacobian: the coupling between 
variable and constant gases should theoretically go into the 
expressions for the Jacobian. But that's just a "smart" comment. I 
don't say that it should be implemented, which would be a pain. Then 
Stuart's comment is more relevant; this could have consequences for 
the values given to absorption models.


To make the rescaling method easy to apply, I would suggest making one 
specific for Earth, that automatically bases the rescaling on H2O. 
There could be a generic one as well.


Yes, this puts some weight on the user. Hydrostatic equilibrium (HSE) 
is a similar case. Input profiles do not always fulfil HSE (this is the 
case for Fascod, if not a matter of geopotential vs geometric 
altitudes?). For HSE it is up to the user to apply this "fine tuning" 
or not. This includes adding a call of the HSE method in OEM 
iterations, to make sure that HSE is maintained after an iteration. The 
VMR rescaling should also be included in the iteration agenda, if the 
retrieval can change H2O close to the ground. That is, a VMR rescaling 
would not be something completely new, as I see it.


Bye,

Patrick


On 2021-09-16 15:01, Stefan Buehler wrote:

Hej,


With our present definition of VMRs, we agree that having 78% N2, 21% 
O2 and e.g. 3% H2O is unphysical? That with a lot of H2O (or any other 
non-fixed gas) the standard values of the fixed gases should be scaled 
downwards; in the example above, with 0.97. Do you agree?


Yes, I agree.


It seems a bit weird to me to use this definition at the (low) level of 
the absorption routines. Perhaps one solution would be to have an option 
for this behaviour when ingesting concentration profile data? Perhaps by 
passing in a list of species that should be considered as not adding to 
the denominator for the VMR definition.


If we agree on the above, then this is the simplest (but not most theoretically 
correct) solution.


Why not correct?

/Stefan



Re: VMRs

2021-09-16 Thread Patrick Eriksson

Hej,

No time for writing a lot. Right now I just want to make a basic check 
of our understanding.


With our present definition of VMRs, we agree that having 78% N2, 21% 
O2 and e.g. 3% H2O is unphysical? That with a lot of H2O (or any other 
non-fixed gas) the standard values of the fixed gases should be scaled 
downwards; in the example above, with 0.97. Do you agree?



It seems a bit weird to me to use this definition at the (low) level of 
the absorption routines. Perhaps one solution would be to have an option 
for this behaviour when ingesting concentration profile data? Perhaps by 
passing in a list of species that should be considered as not adding to 
the denominator for the VMR definition.


If we agree on the above, then this is the simplest (but not most 
theoretically correct) solution.


Bye,

Patrick









Note that for once the special thing about water is here not the fact that it’s 
condensible, I think, but just that there is so much of it, and at the same 
time very variable. Other gas species have also very variable concentrations, 
but it doesn’t matter for the total pressure.

All the best,

Stefan

On 15 Sep 2021, at 20:19, Patrick Eriksson wrote:


Stefan,

Neither had I considered this definition of VMR. But would it not make 
sense to follow it? Then a statement that the atmosphere contains 20.95% 
oxygen makes more sense. You yourself pointed out that it would make 
sense to scale N2 and O2 at low, humid altitudes, where the amount of 
water can be several %. In code preparing data for ARTS I normally do 
this adjustment. Should be more correct!?

A problem is to define what the wet species is when we go to other 
planets. Or maybe there are even planets with several wet species?

That is, I would be in favour of defining VMR with respect to dry air, 
if we can find a manner to handle other planets.

Bye,

Patrick



On 2021-09-15 18:27, Stefan Buehler wrote:

Dear all,

Eli Mlawer brought up an interesting point in some other context:


we recently had an LBLRTM user get confused on our vmr, which is 
amount_of_gas / amount_of_dry_air. They weren't sure that dry air was 
the denominator instead of total air. I'm too lazy to look at the link 
above that @Robert Pincus provided, but I hope it has dry air in the 
denominator. So much easier to simply specify evenly mixed gases, such 
as 400 ppm CO2 (and, 20 years from now, 500 ppm CO2).


I’ve never considered that one could define it this way. Perhaps this 
convention explains, why VMRs in climatologies like FASCOD add up so poorly to 
1.

I’m not suggesting that we change our behaviour, but want to make you aware 
that this convention is in use. (Or perhaps you already were, and just I missed 
it.)

All the best,

Stefan



Re: VMRs

2021-09-15 Thread Patrick Eriksson

Stefan,

Neither had I considered this definition of VMR. But would it not make 
sense to follow it? Then a statement that the atmosphere contains 20.95% 
oxygen makes more sense. You yourself pointed out that it would make 
sense to scale N2 and O2 at low, humid altitudes, where the amount of 
water can be several %. In code preparing data for ARTS I normally do 
this adjustment. Should be more correct!?


A problem is to define what the wet species is when we go to other 
planets. Or maybe there are even planets with several wet species?

That is, I would be in favour of defining VMR with respect to dry air, 
if we can find a manner to handle other planets.


Bye,

Patrick



On 2021-09-15 18:27, Stefan Buehler wrote:

Dear all,

Eli Mlawer brought up an interesting point in some other context:


we recently had an LBLRTM user get confused on our vmr, which is 
amount_of_gas / amount_of_dry_air. They weren't sure that dry air was 
the denominator instead of total air. I'm too lazy to look at the link 
above that @Robert Pincus provided, but I hope it has dry air in the 
denominator. So much easier to simply specify evenly mixed gases, such 
as 400 ppm CO2 (and, 20 years from now, 500 ppm CO2).


I’ve never considered that one could define it this way. Perhaps this 
convention explains, why VMRs in climatologies like FASCOD add up so poorly to 
1.

I’m not suggesting that we change our behaviour, but want to make you aware 
that this convention is in use. (Or perhaps you already were, and just I missed 
it.)

All the best,

Stefan



Re: [arts-dev] 20 Years of ARTS Development

2020-03-11 Thread Patrick Eriksson

Hi all,

And I take the opportunity to thank all who have contributed to ARTS 
during these first 20 years! This with a special thanks to Oliver, who 
has kept a watchful eye on ARTS from day one.


Cheers,

Patrick



On 2020-03-11 14:59, Oliver Lemke wrote:

Hi all,

20 years ago, on March 11, 2000, ARTS was born and development started. 
As a little celebration, I put together an animation to compress these 
20 years into a 3 minute video:


https://youtu.be/rGQDuLs2-5c

Looking forward to the next 20 years. :-)

Have fun,
Oliver




Re: [arts-dev] Fwd: Clouds in ARTS

2019-11-05 Thread Patrick Eriksson

Dear Frank Werner,

It makes me happy to hear that you are integrating ARTS into your code 
base. When we started ARTS, limb sounding was one of the main 
applications, so it is very nice if ARTS gets used on limb sounders 
besides Odin/SMR.


Let me start by asking: are you using v2.2 or a relatively recent v2.3?

If v2.2: Then you have to create the "pnd_field" yourself and import 
data with e.g. ParticleTypeAdd.


If v2.3: In this version you can work with particle size distributions 
(PSDs). Be aware that there was a first system that is now replaced. 
The newer version operates with particle_bulkprop_field. With this 
system you can give ARTS IWC values and select some PSD, such as the 
MH97 one that both Dong Wu and I have used for limb retrievals.


In both cases, the scattering data you either generate inside ARTS with 
the T-matrix method or take from our "scattering database".


Some brief comments. If you tell me what version you actually are using, 
I can provide more detailed help.


Bye,

Patrick


On 2019-11-04 22:07, Claudia Emde wrote:

Dear Arts-Developers,

here is a question about how to include clouds in ARTS. Since I am not 
up-to-date, I forward this message to you.


Best regards,
Claudia


 Forwarded Message 
Subject: Clouds in ARTS
Date:   Mon, 4 Nov 2019 17:40:47 +
From:   Werner, Frank (329D) 
To: claudia.e...@lmu.de 



Hi Claudia,

The MLS satellite team here at JPL has recently started using ARTS, in 
addition to the in house radiative transfer algorithms. Michael Schwartz 
and I have been the two people playing around with ARTS, trying to 
incorporate it as another RT option in our code base. We are almost at 
the point where we have ARTS as another plug-and-play option for our 
retrievals.


One of the last remaining issues is the handling of clouds. As far as I 
can tell, all I have to do is turn the ‘cloudbox’ on and add 
hydrometeors via ‘ParticleTypeAdd’. Is there a simple example for some 
cloud absorption you can send me? It doesn’t need to be super realistic 
or anything. As far as I can tell, the workspace method needs scattering 
properties and number densities. All I could find in the standard ARTS 
data sets is the Chevallier_91L stuff in 
‘/controlfiles/planets/Earth/Chevallier_91L/’.


Again, a simple example of some cloud absorption would be appreciated. 
Thanks for your help!


Best wishes,

Frank

--

Frank Werner
Mail Stop 183-701, Jet Propulsion Laboratory
4800 Oak Grove Drive, Pasadena, California 91109, United States
Phone: +1 818 354-1918

Fax: +1 818 393 5065




[arts-dev] Save the date: ARTS workshop June 2020

2019-10-03 Thread Patrick Eriksson

Dear ARTS friends,

It's time for a new ARTS workshop. The workshop will be similar to the 
old ones, but this time we have also something to celebrate. The ARTS 
project is approaching an age of 20 years! And if all goes well, we will 
announce ARTS-3 some time before the workshop.


The workshop will be held June 8-11, 2020. The venue will again be 
Kristineberg Marine Research Station, on the west coast of Sweden. You 
will need to be in Gothenburg around 14.00 June 8, and be back in 
Gothenburg around 15.00 June 11.


Mark this time period in your calendar. The invitation will be sent out 
in January.


If you are not familiar with these workshops, see:
http://www.radiativetransfer.org/events

(We are aware that the IPWG and IWSSM workshops were just announced for 
June 1-5. This is unlucky, but we cannot move the ARTS workshop as 
Kristineberg is fully booked.)


Kind regards,

Stefan and Patrick


Re: [arts-dev] Adding new PSDs to ARTS

2019-09-27 Thread Patrick Eriksson

Stuart,

Good that you emailed. There are an old and a new system for PSDs. 
pnd_fieldCalcFromscat_speciesFields uses the old system. We have now 
decided that the old system will be removed. This has not yet happened 
due to lack of time. The new system supports retrievals, in contrast to 
the old one.


There could be a lack of a demo case for the new system. The new PSDs 
should all be found in m_psd.cc. The new system uses 
pnd_fieldCalcFromParticleBulkProps.


This is all I have time to write now. I can try to explain more 
carefully on Monday. I can also send you example cfiles.


Bye,

Patrick


On 2019-09-27 17:25, Fox, Stuart wrote:

Hi all,

I would like to add some further PSD options to ARTS (specifically ones 
for rain and graupel that are consistent with the single-moment schemes 
used in the Met Office NWP model). I will be making use of them by 
defining hydrometeor mass density fields and then using 
pnd_fieldCalcFromscat_speciesFields.


However, I’m a little bit confused as to the correct way to implement 
these (and there appear to be incomplete implementations of some of the 
existing options, e.g. Abel & Boutle 2012 for rain, which happens to be 
one of the ones I’d like to use). The guidance in the ARTS developer 
guide also doesn’t seem to follow what’s actually in the code in some 
cases.


So far I have:

- added logic to pnd_fieldCalcFromscat_speciesFields to call 
pnd_fieldXXX for each of the new parametrizations

- updated the documentation for pnd_fieldCalcFromscat_speciesFields in 
methods.cc to include the new parametrizations

- added a new pnd_fieldXXX function to microphysics.cc (not cloudbox.cc 
as suggested by the developer guide) to calculate the pnd field 
according to the raw PSD function psd_XXX and the scattering meta-data 
(and added this to microphysics.h as well)

- added a new “raw” psd calculation psd_XXX to psd.cc

This seems to be all that is required to make my use-case work, but I 
can see that it is not quite complete. In particular, I believe that I 
should add a new workspace method dNdD_XXX to allow a direct calculation 
of the raw PSD. Should this go in m_microphysics.cc (and be added to 
methods.cc)? This seems to be where other ones are, but again the 
developer guide suggests it should be in m_cloudbox.cc.


What is the purpose of the psd_XXX functions in m_psd.cc? Are these also 
required?


Thanks for your help,

Stuart

Dr Stuart Fox  Radiation Research Manager

Met Office  FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
Tel: +44 (0)330 135 2480  Fax: +44 (0)1392 885681
Email: stuart@metoffice.gov.uk  Website: www.metoffice.gov.uk
See our guide to climate change at 
http://www.metoffice.gov.uk/climate-change/guide/



___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] About error checking

2019-03-26 Thread Patrick Eriksson

Richard,

Now you are a bit all over the place. Yes, of course, things can be 
handled in a nicer way if we introduce new features.


ARTS is almost 20 years old! When we started ARTS the aim was in fact to 
use as few "groups" as possible. And I would say that we kept that rule a 
long time, but lately things have changed. You have added some new 
groups during the last years, and OEM resulted in some other new ones. 
Before, we were picky that each group could be imported from and 
exported to files, and could be printed to screen. I don't know if this 
is true for all the newer groups.


I am not saying that the new groups are bad. For sure we needed a special 
group for covariance matrices, as an example. But as usual I would prefer 
that we have a clear strategy for the future. And there should be 
documentation.


I am prepared to discuss this, but not by email. It just takes too much 
time, and these email discussions tend to become a moving target. But I 
could try to find a time for a video/tele conf if there is a general 
feeling that we should add more groups now or in the near future.


Bye,

Patrick






On 2019-03-26 11:51, Richard Larsson wrote:

Hi Patrick,



On Mon, 25 Mar 2019 at 19:47, Patrick Eriksson 
<patrick.eriks...@chalmers.se> wrote:


Hi Richard,

I can agree that this is not always critical for efficiency as long
as the check is a simple comparison. But some checks are much more
demanding. For example, the altitudes in z_field should be strictly
increasing. If you have a large 3D atmosphere, it will be very
costly to
repeat this check for every single ppath calculation. And should
this be
checked also in other places where z_field is used? For example, if you
use iyIndependentBeamApproximation you will repeat the check as also
the
DISORT and RT4 methods should check this, as they can be called without
providing a ppath.


If a bad z_field can cause an assert today, then it has to be checked 
every time it is accessed.


This problem seems simply to be a quick and somewhat bad original 
design (hindsight is 20/20, and all that).  To start with, if it has to 
be structured, then z_field is not a field.  It is as much a grid as 
pressure, so the name needs to change.


And since we have so many grids that demand a certain structure, i.e., 
increasing or decreasing values along some axis but perhaps not all, 
then why are these Tensors and Vectors, which are inherently 
unstructured?  They could be classes of some Grid or StructuredGrid 
type.  You can easily design a test in such a class that makes sure the 
structure is good after every access that can change a value.  Some 
special access functions, like logspace and linspace, and 
HSE-regridding, might have to be added to not trigger the check at a bad 
time, but not many.
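
A rough sketch of what such a class could look like (nothing like this 
exists in ARTS today; all names here are made up):

#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// A grid that re-checks its invariant after every mutating access, so
// that functions receiving it never have to re-check monotonicity.
class StructuredGrid {
public:
  explicit StructuredGrid(std::vector<double> v) : data_(std::move(v)) {
    check();
  }
  double operator[](std::size_t i) const { return data_[i]; }
  std::size_t size() const { return data_.size(); }
  // All writes go through set(), which re-validates the structure.
  void set(std::size_t i, double x) {
    data_[i] = x;
    check();
  }
private:
  void check() const {
    for (std::size_t i = 1; i < data_.size(); ++i)
      if (data_[i] <= data_[i - 1])
        throw std::runtime_error("Grid must be strictly increasing");
  }
  std::vector<double> data_;
};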


Since, I presume, iyIndependentBeamApproximation only takes "const 
Tensor3& z_field" at this point, the current z_field cannot change its 
values inside the function.  However, since it is possible that the 
z_field in iyIndependentBeamApproximation is not the same as the z_field 
when ppath was generated, the sizes of z_field and ppath both have to be 
checked in iyIndependentBeamApproximation and other iy-functions.


However, to repeat: If a bad z_field can cause an assert today, then it 
has to be checked every time it is accessed.



Further, I don't follow what strategy you propose. The discussion
around
planck indicated that you wanted the checks as far down as possible.
But
the last email seems to indicate that you also want checks higher up,
e.g. before entering interpolation. I assume we don't want checks on
every level. So we need to be clear about at what level the checks
shall
be placed. If not, everybody will be lazy and hope that a check
somewhere else catches the problem.


There were asserts in the physics_funcs.cc functions.  Asserts that were 
triggered.  So I changed them to throw-catch.


I am simply saying that every function needs to be sure it cannot 
trigger any asserts.  Using some global magical Index is not enough to 
ensure that.


A Numeric that is not allowed to be outside a certain domain is a 
runtime or domain error and not an assert.  You either throw such errors 
in physics_funcs.cc, you make every function that takes t_field and 
rtp_temperature check that they are correct, or you create a special 
class just for temperature that enforces a positive value.  The first is 
easier.



In any case, it should be easier to provide informative error messages
if problems are identified early on. That is, easier to pinpoint the
reason to the problem.


I agree, but not by the magic that is *_checkedCalc, since it does not 
guarantee a single thing once in another function.

With hope,
//Richard

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
http

Re: [arts-dev] About error checking

2019-03-25 Thread Patrick Eriksson

Hi Richard,

I can agree that this is not always critical for efficiency as long 
as the check is a simple comparison. But some checks are much more 
demanding. For example, the altitudes in z_field should be strictly 
increasing. If you have a large 3D atmosphere, it will be very costly to 
repeat this check for every single ppath calculation. And should this be 
checked also in other places where z_field is used? For example, if you 
use iyIndependentBeamApproximation you will repeat the check as also the 
DISORT and RT4 methods should check this, as they can be called without 
providing a ppath.


Further, I don't follow what strategy you propose. The discussion around 
planck indicated that you wanted the checks as far down as possible. But 
the last email seems to indicate that you also want checks higher up, 
e.g. before entering interpolation. I assume we don't want checks on 
every level. So we need to be clear about at what level the checks shall 
be placed. If not, everybody will be lazy and hope that a check 
somewhere else catches the problem.


In any case, it should be easier to provide informative error messages 
if problems are identified early on. That is, easier to pinpoint the 
reason to the problem.


Bye,

Patrick



On 2019-03-25 12:24, Richard Larsson wrote:

Hi Patrick,

Just some quick points.

On Sun, 24 Mar 2019 at 10:29, Patrick Eriksson 
<patrick.eriks...@chalmers.se> wrote:


Hi Richard,

A great initiative. How errors are thrown can for sure be improved. We
are both lacking such checks (still too many cases where an assert shows
up instead of a proper error message), and the errors are probably
implemented inconsistently.

When it comes to use try/catch, I leave the discussion to others.


But I must bring up another aspect here, on what level to apply asserts
and errors. My view is that we have decided that such basic
functions as
planck should only contain asserts. For efficiency reasons.


Two things.

First, Oliver tested the speeds here.  The differences in 
physics_funcs.cc are essentially random:


number_density (100 million calls, averaged over 5 runs):

with assert:    0.484s
with try/catch: 0.502s, 3.8% slower than assert
no checks:      0.437s, 9.8% faster than assert

dinvplanckdI (20 million calls, averaged over 5 runs):

with assert:    0.576s
with try/catch: 0.563s, 2.3% faster than assert
no checks:      0.561s, 2.7% faster than assert

but with no notable differences.  (We are not spending any of our time 
in these functions really, so +-10% is nothing.)  One nice thing about 
asserts is that they are completely gone when NDEBUG is set.  We might 
therefore want to wrap the deeper function calls in something that 
removes these errors from the compiler's view.  We have the 
DEBUG_ONLY environments for that, but a negative temperature is not a 
debug thing.  I suggested to Oliver that we introduce a flag that allows 
us to remove some parts or all parts of the error-checking code at the 
behest of the user.  I do not know what to name said flag so the code is 
readable.  "IGNORABLE()" in ARTS and "-DIGNORE_ERRORS=1" in cmake to set 
the flag that everything in the previous parenthesis is not passed to 
the compiler?  This could be used to generate your 'faster' code, but 
errors would just be completely ignored; of course, users would have to 
be warned that any OS error or memory error could still follow...
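
Something like the following could do it (a sketch only; IGNORABLE and 
IGNORE_ERRORS are just the names suggested above, nothing that exists 
in ARTS today):

#include <cmath>
#include <stdexcept>

// If cmake passes -DIGNORE_ERRORS=1, the checks vanish from the
// compiler's view, similar to assert under NDEBUG.
#ifdef IGNORE_ERRORS
  #define IGNORABLE(...)
#else
  #define IGNORABLE(...) __VA_ARGS__
#endif

double planck(double f, double t) {
  IGNORABLE(
    if (t <= 0) throw std::runtime_error("Non-positive temperature");
    if (f <= 0) throw std::runtime_error("Non-positive frequency");
  )
  constexpr double h = 6.62607015e-34, k = 1.380649e-23, c = 2.99792458e8;
  return 2 * h * f * f * f / (c * c * std::expm1(h * f / (k * t)));
}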


The second point I have is that I really do not see the point of the 
asserts at all.  Had they allowed the compiler to make guesses, that 
would be somewhat nice.  But in practice, they just barely indicate what 
the issues are by comparing some numbers or indices before terminating a 
program.  They don't offer any solutions, and they should really never 
ever occur.  I would simply ban them from use in ARTS, switch to throws, 
and allow the user to tell the compiler to build a properly 
non-debuggable version of ARTS where all errors are ignored, as above.



For a pure forward model run, a negative frequency or temperature would
come from f_grid and t_field, respectively. We decided to introduce
special check methods, such as atmfields_checkedCalc, to e.g. catch
negative temperatures in input.


I think it would be better if we simply removed the *_checkedCalc 
functions entirely (as a demand for executing code; they are still good 
for sanity checkups).  I think they mess up the logic of many things.  
Agendas that work use these outputs when they don't need them, and the 
methods have to manually check the input anyway because you cannot 
allow segfaults.  It is not the agendas that need these checks.  It is 
the methods calling these agendas.  And they only need the checks to 
ensure they have understood what they want to do.  And even if the 
checked value is positive when you reach a function, you cannot say in 
that method if the check was for the d

Re: [arts-dev] About error checking

2019-03-24 Thread Patrick Eriksson

Hi Richard,

A great initiative. How errors are thrown can for sure be improved. We 
are both lacking such checks (still too many cases where an assert shows 
up instead of a proper error message), and the errors are probably 
implemented inconsistently.


When it comes to use try/catch, I leave the discussion to others.


But I must bring up another aspect here, on what level to apply asserts 
and errors. My view is that we have decided that such basic functions as 
planck should only contain asserts. For efficiency reasons.


For a pure forward model run, a negative frequency or temperature would 
come from f_grid and t_field, respectively. We decided to introduce 
special check methods, such as atmfields_checkedCalc, to e.g. catch 
negative temperatures in input.


When doing OEM, negative temperatures can pop up after each iteration 
and this should be checked. But not by planck, this should happen on a 
higher level.


A simple solution here is to include a call of atmfields_checkedCalc 
etc. in inversion_iterate_agenda. The drawback is that some data will be 
checked over and over again despite not being changed.


So it could be discussed if checks should be added to the OEM part. That 
data changed in an iteration, should be checked for unphysical values.



That is, I think there are more things to discuss than you bring up in 
your email. So don't start anything big before we have reached a common 
view here.


Bye,

Patrick


On 2019-03-22 16:34, Richard Larsson wrote:

Hi all,

I have kept running into problems with errors in ARTS produced by bad 
input for OEM.  Asserts, and not exceptions, terminate the program in 
several cases.


I just made a small update to turn several errors affecting the Zeeman 
code that before could yield assert errors into try-catch, throwing 
runtime_error().  This means I can catch the errors properly in a Python 
try-except block.  The speed of the execution of the central parts of 
the code is unaffected in tests.  I need input from the ARTS developers 
on whether the way I did this is stylistically acceptable or not.


When updating these error handlers, I decided to use function-try-blocks 
instead of in-lined try-blocks.  I shared some code with Oliver, because 
of the errors above, and he suggested against using function-try-blocks, 
in favor of the traditional style of keeping all the error handling 
inside the main block.  However, he later in the conversation also 
agreed with me that it makes it much easier to pass errors upwards in 
ARTS from the lower functions if we use function-try-blocks, since all 
the function calls of a function are then automatically inside a 
try-catch block.  So we decided to run the stylistic question by everyone.


Please give me a comment on whether this is OK stylistically in ARTS. 
I find the function-try-block cleaner since all the error-printing code 
is kept away, but if others disagree it just complicates matters.


The easiest demonstration of this change is in the updated 
src/physics_funcs.cc file.  Please have a 
look at the two "planck()" functions.  Both versions only throw (const 
char* e) errors themselves and turn them into std::runtime_error 
before re-throwing.  However, this means that the VectorView version of 
the function can see an error that is (const std::exception& e), because 
the catch-block of the Numeric planck() function turns it into one.  And 
since all errors in ARTS have to be runtime errors for the user, it can 
also know that any upwards forwarding will deal with runtime errors.


With hope,
//Richard

The src/physics_funcs.cc planck() error handling:

If the planck() Vector-function is sent a negative temperature, the 
error it produces will look as such:

Errors in calls by *planck* internal function:
Errors raised by *planck* internal function:
     Non-positive temperature

If the planck() Vector function is passed a frequency vector as 
[-1, -0.5, 0, 0.5, 1], the error it produces will look as such:

Errors in calls by *planck* internal function:
Errors raised by *planck* internal function:
     Error: Non-positive frequency
     You have 3 frequency grid points that reports a non-positive frequency!

PS.  So you do not have to search:

Function-try-block form:  void fun() try {} catch(...) {}

Inline form: void fun() {try{} catch(...) {}}

Same length of code.  Function-try-blocks do not have the variables 
before the throw-statement available for output, they have to be thrown 
to be seen.  However, you can perform most if not all computations you 
wish inside the catch-block.  Like the error counter I made for f-grid 
in the *planck* function's catch above.
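
To make the comparison concrete, a self-contained illustration of the 
two forms (simplified here; not the actual code in src/physics_funcs.cc):

#include <cmath>
#include <stdexcept>
#include <string>

// Function-try-block form: the catch spans the whole body, so every
// function call made inside it is automatically covered.
double planck(double f, double t) try {
  if (t <= 0) throw "Non-positive temperature";
  if (f <= 0) throw "Non-positive frequency";
  constexpr double h = 6.62607015e-34, k = 1.380649e-23, c = 2.99792458e8;
  return 2 * h * f * f * f / (c * c * std::expm1(h * f / (k * t)));
} catch (const char* e) {
  // Turn the internal (const char*) throw into a runtime_error.
  throw std::runtime_error(std::string("Errors raised by *planck*:\n\t") + e);
}

// Inline form: identical behavior, but the try block is written out
// inside the function body.
double planck_inline(double f, double t) {
  try {
    if (t <= 0) throw "Non-positive temperature";
    if (f <= 0) throw "Non-positive frequency";
    constexpr double h = 6.62607015e-34, k = 1.380649e-23, c = 2.99792458e8;
    return 2 * h * f * f * f / (c * c * std::expm1(h * f / (k * t)));
  } catch (const char* e) {
    throw std::runtime_error(std::string("Errors raised by *planck*:\n\t") + e);
  }
}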



___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi



Re: [arts-dev] Documentation request: jacobianAdd* and retrievalAdd* functions

2019-01-21 Thread Patrick Eriksson

Hi Richard,

It's nice that you are trying to use these methods. As far as I know, 
you are the first to use them without Qpack.


I will have a look at the jacobianAdd methods and try to explain more 
clearly how many x-elements are generated by each method.


It seems reasonable that you should get information on the involved sizes 
if the Jacobian and Sx parts do not match. I assume Simon will look at it 
(but he is taking a course this week, working on an ESA study, ..., so it 
can take some time).


Bye,

Patrick



On 2019-01-21 15:07, Richard Larsson wrote:

Hi,

(This is mainly a question to Simon and Patrick but the dev-list exists 
so I am using it.)


I have been trying out the retrievalAdd* functions for the systems we 
have in Gottingen.  One of the most difficult bits is to figure out 
how to complete the retrieval setups without looping on the errors 
being reported.  I might be a complete idiot about this, but the 
documentation and error reporting by ARTS seem far from good here.


I have identified two problems:

1)  The covmat-block size requested by the add-functions is not 
reported in the documentation of said functions.


2)  The error when either retrievalDefClose or the individual 
retrievalAdd* functions fail is not detailed enough to even hint at the 
problem; it simply states that the covmat has the wrong size.


I have suggestions below for how I would fix it if I knew the functions 
well enough.  Ignore these if you want to, but please try to address the 
poor documentation and error somehow.


For the first, each individual retrievalAdd* function would have to be 
addressed.  Some examples of problematic functions: jacobianAddFreqShift 
reports it may be "constant in time", which means covmat_block is 
1-by-1, and jacobianAddSinefit reports "one sine and one cosine term" 
per period length, or a 2-by-2 uncorrelated covmat_block for every 
period length.  These also sound like reasonable sizes, given that they 
both are just used as baseline fits for sensor phenomena (so there is no 
p_grid dependency).  However, of course they fail when you use these 
covmat block sizes.  This means there is an error in the method 
documentation.  To fix this, I suggest the increase in size of the 
Jacobian matrix is written clearly in each of the jacobianAdd* 
descriptions.  The same applies for their retrievalAdd* cousins, where the 
size of the covmat_block should be spelled out.


The second point seems even easier to address.  If the internal check 
fails, please report how.  If I see: "I was expecting the Jacobian 
matrix to be 4001 x 510 and the covariance matrix to be 510 X 510.  
Instead, the covariance matrix is 498 X 498", this means that I can 
begin to guess at the error.  Presently, the somewhat nonsensical 
"Matrix in covmat_block is inconsistent with the retrieval grids" is 
used instead, which does not help identify the cause of the problem at all.


With hope,
//Richard

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi




Re: [arts-dev] freqShift in xaStandard and x2artsStandard

2018-09-18 Thread Patrick Eriksson

Dear Jonas,

Some quick feedback, I am on a conference.

The variables that you can retrieve using ARTS-OEM so far are mainly 
atmospheric quantities. Handling instrument variables is mainly left for 
the future.

A constant frequency shift could be handled by shifting the transitions, 
as you tried to do. But frequency shifts are in fact an instrument 
parameter, and moving the transitions will not work for frequency 
stretch. So it seems to be time to implement a general way to handle 
instrument variables. I will discuss with Simon, who is also here at 
the conference.


For the moment I suggest that you do repeated linear inversions, 
adjusting your instrument frequencies after each linear inversion. This 
should work with your extension of xaStandard. After some iterations, 
turn off the frequency shift and make a final inversion.


Kind regards,

Patrick



On 2018-09-17 17:22, Jonas Hagen wrote:

Hello ARTS Developers,

I'm trying to retrieve the Frequency Shift along with Wind with the ARTS 
internal retrieval. To my understanding, this should work, but support 
in the xaStandard and x2artsStandard WSMs is missing and results in an 
error: "Found a retrieval quantity that is not yet handled by internal 
retrievals: Frequency"


For xaStandard(), the a priori Frequency Shift could easily be set to 
zero after line 793 of m_oem.cc along with baseline and pointing.
For x2artsStandard(), maybe a new WSV would make sense (f_shift) and the 
inversion_iterate_agenda would then call 
abs_linesShiftFrequency(f_shift), similar to the baseline stuff?
I tried to implement it myself but got stuck with jacobian_quantities 
and indices in x2artsStandard().


Best regards,
Jonas Hagen
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi



Re: [arts-dev] Re: Fwd: ARTS user

2018-07-02 Thread Patrick Eriksson

Dear Alfred,

I'm glad to learn that the development version can handle the radar 
observation. But, I want to know what this MC module can simulate, the 
reflection coefficient of cloud or the thermodynamic radiation?


The MC module is called MCRadar.

I am not sure what you mean by "thermodynamic radiation", but 
MCRadar is intended to mimic radar measurements, including multiple 
scattering, attenuation and antenna pattern. So I assume it returns 
what you are looking for.


For more details contact: Adams, Ian 

The module restricted to single scattering is called iyActiveSingleScat.


As I mentioned before, I made some changes to the stable version 2.2.64, 
added a radar transmitter and got the simulated reflection coefficient 
data. However, there is no benchmark result for me to verify the 
validity. If the development version can simulate reflection then I can 
compare those two results.


This was in fact my thinking, but not clearly expressed in my email. Why 
not start with light rain and compare to iyActiveSingleScat (as this 
method is very fast), and if all is OK continue with MCRadar and cases 
with stronger scattering.


We have not made any extensive comparisons, but in the tests I have done 
iyActiveSingleScat and MCRadar agree as long as multiple scattering can 
be ignored. It seems, though, that MCRadar has a bias just above the 
surface, but this is not critical as it is inside the clutter zone.


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Transit radius

2018-03-09 Thread Patrick Eriksson

Stefan,

A quick answer.


Is there a smarter way?


Not inside ARTS itself.



And, for the way I outlined, what is the currently recommended way to get out 
tangent altitude and opacity along the los?


Note that you don't need to make a Tb calculation, you can calculate the 
transmission directly by iyTransmissionStandard. A bit quicker and you 
can include particles.


The method TangentPointExtract extracts the tangent point data. First 
element of the vector returned is the tangent altitude (or is it still 
radius in v2.2?).


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Retrieval units and transformations

2017-12-03 Thread Patrick Eriksson

Hi Simon, Richard and all,

I started to think about how to allow a log10 retrieval "unit" for 
scattering quantities. As often happens, this ended with me wanting to 
make a general cleanup and reorganisation.


My idea is to move towards a clearer distinction between units and 
transformations. In addition, we have to deal with two types of 
transformations, linear and non-linear. I think these three shall be 
applied in the order: unit, non-linear and linear.
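
In symbols (my formalisation of the above, to be pinned down when 
implementing): with v the native quantity, u the unit change, g the 
non-linear mapping (log or log10) and (A, b) the linear transformation, 
an element of the retrieval vector would be

  x = A g(u(v)) + b

and mapping back to the forward-model quantity inverts the chain in the 
reverse order.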


Comments:

unit: This would now just be pure unit changes, such as "nd" (number 
density). Later I would also like to allow relative and specific 
humidity for H2O. We could also allow "C" for temperature ...


(Unit changes will be specific for each retrieval quantity, while 
transformations should be implemented in a general manner.)


Non-linear transformations: I would like to remove the "logrel" option 
(now a unit option), and instead generally introduce "log" and "log10" 
(without ruling out adding more transformations, such as tanh+1?).


Linear transformations: As already introduced by Simon.

The unit part will be handled by the iy-methods. For the transformations 
I suggest extending the scope of the present jacobianTransform (as well 
as merging it with jacobianAdjustAfterIteration, which handles a 
rescaling for "rel" that is necessary for iterative OEM). 




All: Comments? Something that I have missed?


Richard: The handling of units seems a bit messy to me. The function 
dxdvmrscf is applied in get_ppath_pmat_and_tmat, but only if 
from_propmat. dxdvmrscf is also applied in 
adapt_stepwise_partial_derivatives. This confuses me, but could be a 
heritage of my old code.
Would it not be simpler if the core code just operates with the ARTS 
default unit, i.e. VMR for abs species? And then the conversion is done 
only on the final Jacobian values (along the ppath). This should be a 
general function, called towards the end of all iy-methods providing 
Jacobians. As far as I can see that should work, and should give cleaner 
code. Agree? Or have I missed something?


Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Concept for Radiative Fluxes and Heating Rates in ARTS

2017-11-30 Thread Patrick Eriksson

Hi Freddy, Hi all,

A great idea to break down the calculations into more steps than we 
discussed in Hamburg. A general OK to your plans, but I think we need to 
establish what terminology to use in ARTS before discussing the details. 
I got dizzy when reading your plan ...


In ARTS we call spectral radiance "intensity", i for short. 
"Intensity" is a vague name, but it would be a big thing to change this 
nomenclature now. But I think we should use clearer names when adding 
new WSVs. At the same time, we have discussed renaming doit_i_field, as 
well as having i_field for both the total atmosphere and the cloudbox.


I have quickly tried to make a naming suggestion for the main WSVs, found 
below. (It seems we both have looked at Wikipedia.) I picked flux 
density (fluxd) in favor of irradiance to have a more distinct 
difference to radiance. Can be discussed.


I am not clear about where we need to keep upward and downward streams 
separated in these new WSVs. So I am not sure about the exact tensor 
dimensions needed yet. For example, it seems that you have a directional 
dimension for heating rates. That I don't get? By the way, what unit 
shall we use for heating rates? SI should be K/s!?


Regarding "IfieldFromIycalc1DPP". I have long planned to make a 
ppath1DPP. Would fit well here. The new iyEmissionStandard will make 
this very easy. You will just need to do calculations at TOA and 
surface. (In principle only TOA could suffice with a limitation to 
specular surfaces, but I think it is better to be general despite a bit 
slower)


All written very quickly. Let's discuss during the video con later today.

Bye,

Patrick


Name: Spectral radiance
Unit: W/(m2 Hz sr)
ARTS: i_field [Tensor 7]
ARTS: cloudbox_i_field [Tensor 7]

Name: Radiance
Unit: W/(m2 sr)
ARTS: radiance_field [Tensor 6]

Name: Spectral irradiance
Name: Spectral flux density
Unit: W/(m2 Hz)
ARTS: spectral_fluxd_field [Tensor 5]

Name: Irradiance
Name: Flux density
Unit: W/m2
ARTS: fluxd_field [Tensor 4]

Name: Heating rate
Unit: ?
ARTS: heating_rate [Tensor 3]

Angular grids:
field_za_grid
field_aa_grid
cloudbox_field_za_grid
cloudbox_field_aa_grid





On 2017-11-30 14:35, Manfred Brath wrote:

Hello all,

I plan to implement functions in ARTS to calculate monochromatic 
(spectral) radiative fluxes, also called monochromatic (spectral) 
irradiance; radiative fluxes, also called irradiance; radiance; and 
heating rates from the radiation field. Any comments or suggestions are 
welcome.
For that purpose I would like to implement five new workspace methods, 
which will be explained below.



  RFAngularGridsSet

This method will be similar to DOAngularGridsSet and set up the angular 
grids for the flux calculation, but it also calculates the integration 
weights for the zenith angle integration. (Maybe this function can be 
included in a revised version of DOAngularGridsSet.)


Input:

n_za
Number of grid points in zenith direction per hemisphere (Index)
n_aa
Number of grid points in azimuth direction per hemisphere (Index,
default=1)
gridtype_az
Defines the type of azimuth grid (string):

  * double_gauss, double Gauss in μ = cos θ
  * linear_mu, linear in μ = cos θ
  * linear, linear in θ

Output:

doit_za_grid_size
Number of equidistant grid points of the zenith angle grid, defined
from 0° to 180°, for the scattering integral calculation.
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
za_grid_weights
Integration weights for zenith angle integration (Vector, new 
workspace variable)


  IrradianceFromIfield

This method will calculate the monochromatic (spectral) irradiance and 
the irradiance (radiative fluxes). Important: this function will only use 
the first Stokes component of doit_i_field, and iy_unit must be "1".


Input:

doit_i_field
Radiation field (Tensor7)
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
f_grid
Frequency grid (Vector)
Output:

sir_field
Spectral irradiance (radiative flux) (Tensor5, new workspace 
variable) [W m-2 Hz-1]. Size: [Nf, size(doit_i_field, dim=1), 
size(doit_i_field, dim=2), size(doit_i_field, dim=3), 2], last 
dimension is upward and downward direction.
ir_field
Irradiance (radiative flux) (Tensor4, new workspace variable) 
[W m-2]. Size: [size(doit_i_field, dim=1), size(doit_i_field, dim=2), 
size(doit_i_field, dim=3), 2]
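
As a side note, what is computed here should just be the hemispheric 
integral of the radiance field; in LaTeX notation (my formalisation, 
with za_grid_weights presumably carrying the discretised zenith part of 
the quadrature):

  F_\nu^{\pm} = \int_0^{2\pi} \int_{\theta^{\pm}} I_\nu(\theta, \varphi)
                \cos\theta \, \sin\theta \, d\theta \, d\varphi

with the zenith integration running over the upward or downward 
hemisphere, giving the two entries of the last dimension.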


  RadianceFromIfield

This method will calculate the radiance and the irradiance (radiative 
fluxes). Important: this function will only use the first Stokes 
component of doit_i_field, and iy_unit must be "1".


Input:

doit_i_field
Radiation field (Tensor7)
scat_aa_grid
Azimuth angle grid (Vector)
scat_za_grid
Zenith angle grid (Vector)
f_grid
Frequency grid (Vector)
Output:

r_field
radiance 

Re: [arts-dev] Scattering calculations when the cloud box is switched off

2017-06-07 Thread Patrick Eriksson

Hi Jana and Jakob , hi all,

Before commenting on this particular question, I see a more general 
discussion here. There are many similar issues. Shall we focus on 
catching potential user mistakes/misunderstandings, or be less 
restrictive to simplify batch/operational processing? For example, I 
discussed with Simon today how OEM shall behave when an error occurs 
during the iterations. (And I liked Simon's solution: OEM does not throw 
an error but flags the problem by an output argument. We must trust that 
the user checks that variable.) Hence, it would be good to come up with 
a general strategy. The question is when and how to discuss this?


There will be a bit of ARTS planning next week when Simon and I are in 
Hamburg. I don't know if we will get time to also discuss this, but 
maybe. So you others that will not be in Hamburg, if you have any opinion 
on this matter, send an email so we know about it, in case we happen to 
reach this issue.



Regarding the present issue, I think it should be possible to use the 
same set-up even if the cloudbox happens to end up with zero size. 
Whether there should be some kind of "robust" flag or not can be discussed.


This is in line with my general view. We shall not be too restrictive in 
our tests. Real errors SHALL be caught, but as long as things are 
formally correct I think it is best to let things pass. In the end there 
could be a good reason for doing things that way.


Bye,

Patrick



On 2017-06-07 14:24, Jana Mendrok wrote:

Hi Jakob,

thanks for your feedback!
It was me who made that change, for the reason you also identified: that 
otherwise it easily goes unnoticed that no scattering has actually been 
done. This happened to me a few times. And when calling a scattering 
solver, the user presumably intends to actually perform a scattering 
calculation. I understand your issues, though.


Spontaneously, I don't see an option that satisfies both. Below are a 
couple of options I can think of to deal with this issue (in the PS, some 
options that you yourself could apply, without changes to the official 
code). I would appreciate feedback from other developers (and users) on 
what you prefer and what is considered more important (my issues of 
course seem more important - to me; very subjective). Or maybe you have 
better ideas how to solve that conflict.


So, code-wise we could (either):

- generally go back to the old behaviour.

- stay with the new behaviour.

- introduce a ("robust"?) option to allow the user to control the 
no-cloudbox behaviour.


- make cloudboxSetAutomatically behave differently for clearsky cases 
(return a minimal cloudbox? and maybe let the user control which 
behaviour - minimal or no cloudbox - is applied?).


wishes,
Jana


ps. Some options, you yourself have, Jakob:

- you can of course locally remove the newly introduced error throwing 
and go back to the old behaviour in your own ARTS compilation.


- with the current version (no-cloudbox throws error) you could make a 
"cloudy" run (with empty results for the pure clearsky cases) and an 
explicit clearsky run and postprocess the results to whatever you need.


- you could use a manually set cloudbox (that can be better for some 
study setups anyway; it ensures better comparability between different 
cases, as then they are equally affected by scattering solver errors 
(sphericity, vertical resolution, interpolation, etc.)).



On Wed, Jun 7, 2017 at 1:26 PM, Jakob Sd wrote:


Hi,

recently there has been a change in the way DOIT and DISORT handle
atmospheres where the cloud box is switched off (cloudbox_on = 0).
Before, they just skipped the scattering calculation, threw a
warning, and everything was ok, as the clear-sky calculations
afterwards took care of it.
But now, they throw a runtime error, which means that the
calculation is stopped and the results will be empty for that
atmosphere. I understand that this runtime error makes sense if
someone wants to calculate with scattering but by mistake switches
off the cloud box. But if someone has a batch of atmospheres from
which some are clear sky atmospheres and uses
cloudboxSetAutomatically, this can be quite uncomfortable, because
all the clear sky atmospheres that were correctly calculated before,
are now empty and the user has to manually select those atmospheres
from his batch and calculate them using clear sky ARTS.

Greetings from Hamburg,

Jakob

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de

https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi





--
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology

[arts-dev] ARTS 2017 workshop

2017-02-22 Thread Patrick Eriksson

Dear radiative transfer friend,

We are pleased to announce a new workshop in the series of "ARTS 
workshops". You don't need to be an ARTS user or developer to 
participate; the workshop is open to all with an interest in 
atmospheric radiative transfer. There is normally a strong focus on 
microwave to infrared radiative transfer, but other wavelength 
regions are also of interest.


The place is the same as last time, Kristineberg (about 100 km north of 
Gothenburg). Time for the actual workshop: September 6-8 (Wednesday 
morning to Friday lunch). We will arrange transport between Gothenburg 
and Kristineberg, and it will depart from Gothenburg around 15.00 on 
September 5. That is, you need to arrive in Gothenburg not too late on 
Sep 5, and should be able to travel back on Sep 8.


The general goal of the workshop is as usual, that the ARTS user 
community (and also people working with other RT models) can meet, get 
to know each other, solve practical problems, and discuss the further 
development of the program. As always, we will have only a relatively 
small number of talks, and instead more time for group work and 
discussions. The present/planned main development of ARTS is directed 
towards

- Faster scattering calculations
- Running OEM inside ARTS
- Non-LTE
but the workshop is not restricted to these topics.

If you are interested in participating, then please fill in the 
pre-registration form at


https://arts.mi.uni-hamburg.de/service/workshop/arts2017.php

in order to allow us to plan the program. The deadline for 
pre-registration is March 31. Since the available space at Kristineberg 
is limited, we have to limit the meeting to roughly 25 persons. If more 
persons are interested, it will be first come, first served.


Kristineberg is a marine research station. The station offers full board 
and lodging, but the number of rooms is limited and most workshop 
participants will need to share double rooms. At the moment there are 
only three single rooms at hand. If you require a single room, indicate 
this under Comments. Transport to/from Gothenburg is arranged at the 
start/end of the workshop. You only pay for room and food at 
Kristineberg. We cannot yet give you an exact price, but it should be on 
the order of 250 euro.


We send our best regards and hope to see you at Kristineberg,

Patrick Eriksson,
Stefan Buehler

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Fwd: 1st Summer Snowfall Workshop, 28-30 June 2017, Cologne, Germany

2017-01-19 Thread Patrick Eriksson

Hi all,

Information found below about a workshop with strong connection to 
ongoing ARTS development. For example, there should be some workshop 
contribution(s) associated with the database of single scattering 
properties we are developing.


Bye,

Patrick




 Forwarded Message 
Subject:1st Summer Snowfall Workshop, 28-30 June 2017, Cologne, Germany
Date:   Wed, 18 Jan 2017 14:38:34 +0100
From:   Stefan Kneifel 




Save-the-date

1st Summer Snowfall Workshop

Scattering and microphysical properties of ice particles


28-30 June 2017, University of Cologne, Germany


Dear colleagues,

as a follow-up of the productive discussion about microwave ice and snow 
scattering properties at the last IPWG-IWSSM workshop in Bologna, we 
want to organize a 3-day workshop at the University of Cologne (Germany) 
from 28-30 June 2017.


The main goals of this workshop are to:


discuss the progress in developing single scattering databases:

  * Existing and ongoing scattering databases
  * Definition of scattering data structure and conventions
  * Scattering database interface tools
  * Scattering database repository

their applications:

  * Bulk scattering properties
  * Scattering approximations
  * Particular requirements for passive and active applications
  * Guidelines and best practices for database users

and to bridge the gap between scattering and microphysical properties 
of snow:


  * In-situ properties of ice and snow particles
  * Observational constraints for scattering datasets


For the preparation of the workshop, we would appreciate it if you could 
let us know whether you are interested in attending the workshop 
before February 3rd 2017. More information and the link to the 
registration page will follow in a separate email.



Kind regards,

Stefan Kneifel and Dmitri Moisseev


--
***
Dr. Stefan Kneifel
OPTIMIce Emmy-Noether Group Leader
Institute for Geophysics and Meteorology
University of Cologne
Pohligstrasse 3 (Room 3.103), 50969 Cologne, Germany
sknei...@meteo.uni-koeln.de
Phone: +49 221 470 6272
---
http://www.researchgate.net/profile/Stefan_Kneifel
http://www.geomet.uni-koeln.de
***

___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] ARTS workshop 2017

2016-12-21 Thread Patrick Eriksson

Hi all,

Our Christmas present to you!

We have now decided to arrange a new ARTS workshop. We will keep the 
basic format of the workshop, and the venue will be the same as last 
time, Kristineberg north of Gothenburg.


We are aiming for August 30 to Sep 1 (2017). If this time period does 
not work for you, inform us and we will consider changing the dates. (But 
we will only change if we have missed a clash with some important 
conference or something similar.)


The official workshop announcement will come early next year.

Merry Christmas and a happy new year,

Patrick and Stefan
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Abs lookup, introduce t_grid?

2016-11-15 Thread Patrick Eriksson

Hi Stefan,

I understand that setting abs_t to be constant is an option already now, 
but that does not give a speed improvement!? Or is there an internal 
switch that is triggered that I don't know about?


Bye,

Patrick


On 2016-11-15 15:06, Stefan Buehler wrote:

Hi Patrick,

you can use a constant reference T profile already now. I think it
roughly doubles the size of the lookup table, though.

/Stefan

On Tue, 15 Nov 2016 at 13:37, Patrick Eriksson
<patrick.eriks...@chalmers.se> wrote:

Hi all,

I struggled a bit to set up absorption lookup tables for our Odin/SMR
processing. For some frequency modes we extend the retrieval into the
thermosphere, and this causes problems. My reference temperature profile
is about 170 K at the mesopause, and accordingly abs_t_pert can not go
below about -160 K. This means that I have only a 160 K margin downwards
in the thermosphere, which is by far too narrow.

My present solution is to not allow the reference temperature to be above
300 K, and instead have an abs_t_pert going to high positive values (+600
K). This works and is OK.

However, this got me thinking. The simplest for me would in fact be to
set abs_t to e.g. 250 K at all altitudes, which would basically give
me a fixed t_grid. Further, with modern computers where memory is not a
problem, maybe it is time to give up on using abs_t + abs_t_pert, and
instead just have a t_grid. That is, to have a standard "rectangular"
set-up, with a standard pressure and temperature grid.


I suggest switching to a fixed t_grid as I think it could speed up the
interpolation significantly. I assume that with the present abs-table,
new temperature grid positions must be calculated for each altitude (as
abs_t(i)+abs_t_pert varies). Stefan: Can you confirm this? Have you
considered the speed impact of this?

With a fixed t_grid, a given temperature has the same grid position at
all altitudes.


Any comments?

Bye,

Patrick


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Abs lookup, introduce t_grid?

2016-11-15 Thread Patrick Eriksson

Hi all,

I struggled a bit to set up absorption lookup tables for our Odin/SMR 
processing. For some frequency modes we extend the retrieval into the 
thermosphere, and this causes problems. My reference temperature profile 
is about 170 K at the mesopause, and accordingly abs_t_pert can not go 
below about -160 K. This means that I have only a 160 K margin downwards 
in the thermosphere, which is by far too narrow.


My present solution is to not allow the reference temperature to be above 
300 K, and instead have an abs_t_pert going to high positive values (+600 
K). This works and is OK.


However, this got me thinking. The simplest for me would in fact be to 
set abs_t to e.g. 250 K at all altitudes, which would basically give 
me a fixed t_grid. Further, with modern computers where memory is not a 
problem, maybe it is time to give up on using abs_t + abs_t_pert, and 
instead just have a t_grid. That is, to have a standard "rectangular" 
set-up, with a standard pressure and temperature grid.



I suggest switching to a fixed t_grid as I think it could speed up the 
interpolation significantly. I assume that with the present abs-table, 
new temperature grid positions must be calculated for each altitude (as 
abs_t(i)+abs_t_pert varies). Stefan: Can you confirm this? Have you 
considered the speed impact of this?


With a fixed t_grid, a given temperature has the same grid position at 
all altitudes.
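
To illustrate where the saving would come from (schematic only, not the 
actual lookup-table code):

#include <algorithm>
#include <vector>

// Index of the grid interval containing t; assumes the grid is
// strictly increasing and grid.front() <= t < grid.back().
std::size_t grid_pos(const std::vector<double>& grid, double t) {
  return std::upper_bound(grid.begin(), grid.end(), t) - grid.begin() - 1;
}

// Present layout: the effective temperature grid differs per pressure
// level, t_eff[ip][it] = abs_t[ip] + abs_t_pert[it], so the position of
// a given T must be searched at every level ip.
//
// Fixed t_grid: grid_pos(t_grid, T) depends on T alone and can be
// computed once and reused at all levels.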



Any comments?

Bye,

Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] ARTS' SingeScatteringData

2016-11-09 Thread Patrick Eriksson

Hi all,

We (Jana and Patrick) have discovered several issues over the last weeks
around the SingleScatteringData format.

1. Definition of direction
-
A direction can be specified by how the photons move, or by the
direction in which you observe to detect the photons. The radiative
transfer functions in ARTS use the latter definition, and call this a
line-of-sight (LOS). We have not found a clear statement of whether the
scattering data format assumes photon directions or LOS. In fact,
different assumptions have been made in DOIT and MC. In MC, LOS values
are mirrored before extracting scattering properties, while this is not
done in DOIT.
Our discussion of scattering data follows Mishchenko et al (2002) and we
should stick to it. With this interpretation, presently MC is doing the
right thing. As far as we understand, the issue has no influence on DOIT
for random orientation. For horizontally aligned particles, all is OK
for stokes_dim 1 and 2 (due to reciprocity), but there are issues
for higher stokes_dims (namely sign errors in the lower left and upper 
right

matrix blocks).


2. Azimuth angle
-
In ARTS' definition of LOS the azimuth angle is counted clockwise, while
for scattering data the azimuth angle goes in the opposite direction
(Fig 6.1 in ATD, consistent with Mishchenko et al (2002)). This is not
considered by either MC or DOIT, and should give a sign error
for stokes_dim 3 and 4.


3. Format for "horizontally aligned"
-
We have now realized that this format is not as general as we (at least
JM+PE) thought. It does not treat all horizontally aligned or azimuthally
randomly oriented particles. The (orientation averaged) particles must
also be symmetric around the horizontal plane. Such a symmetry will
rather be the exception when working with arbitrarily shaped particles
(and using DDA) and also, e.g., excludes realistically shaped rain drops.
We could introduce a new format for this, but that would make code and
documentation even more complicated.
Expressed simply, and discussing the phase matrix: we currently store the
left part of the matrix, holding data for incident and scattered zenith 
angles (in table cols and rows, respectively). By making use of the 
reciprocity theorem, we could get away with storing just the upper 
triangle, i.e. with the same amount of data as now. But that would make 
the internal storage more complicated and require heavier calculations 
to extract the data (not just sign changes are needed; a transformation 
matrix, though simple, must be applied). So we simply suggest that we 
store the complete phase matrix. That is, the incoming zenith directions 
will be [0,180] and not just [0,90] as now. And to keep things as 
simple as possible we suggest to do the same for abs_vec and ext_mat.
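
(For completeness: the transformation referred to should be the
phase-matrix reciprocity relation of Mishchenko et al (2002), which as
we read it takes the form

  Z(-n_inc, -n_sca) = D Z^T(n_sca, n_inc) D,   D = diag(1, 1, -1, 1),

i.e. a transpose combined with sign flips through D; simple, but still
an extra operation on every data extraction.)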
We don't need to change the fields in the data format, but this should 
still be a

new version of the format. And when we are introducing a new format we would
also like to rename the "ptypes" as well, as "horizontally_aligned" is 
not a good

name when we start to work with tilted particles. We suggest the names
  totally_random
  azimuthally_random

(We are not 100% sure about some of the theoretical details, but the
three main remarks should still be valid.)

Any comments or opinion?

We (mainly Jana) plan to start attacking these things relatively soon. If
anybody wants to help out in the revision, please let us know.

Bye,

Patrick and Jana
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] New ARTS features

2016-10-19 Thread Patrick Eriksson

Dear ARTS users,

after a period with somewhat slower development of ARTS, we are again in
an active period. Some stuff has already been added and we are planning 
some more additions. With this email we want to briefly announce these 
features, and at the same time clarify how we add new features and how 
we prefer that they are used.


We are as usual committing our changes to the development version
(presently v2.3, later v2.5 or v3.1). Hence, the additions are available
from day 1. Or rather, the additions are at hand already before they are
ready and properly tested.

The alternative would be an internal development branch and releasing
the additions only after we have tested the new feature, and published
an article about it. We want to avoid this. It would make the
maintenance of ARTS more complicated.  More importantly, it would reduce
the number of persons that give feedback and contribute to the testing
of the new feature.

That is, we happily see that you are using new and experimental 
features, as long as it is done in collaboration with us. You will then

get help to make sure that the new feature is used as intended, and we
get feedback that helps us to improve things. To be clear, we prefer
that the main developer(s) on our side is included in publications where
new ARTS features are used. The normal end point of this period is when
we have made a publication that introduces the addition.

Here is a list of recent and planned additions, and the main person to
contact if you want to start using it:

DISORT and RT4: Jana (more or less clear additions)

OEM: Patrick (ready, but with limited scope compared to Qpack)

Single scattering data: Robin/Patrick and Manfred (first data should be
added to ARTS site soon)

Running 1D scattering solver on 3D atmospheres: Patrick (to be implemented)

Oxygen line mixing: Richard Larsson

non-LTE: Richard Larsson (in early development)

New standard setups for meteorological sensors: Alex Bobryshev

More robust DOIT scattering solver: Jacob / Stefan

TYPHON Python interface: Lukas / Oliver

DOIT Jacobians: Jana

Mapping of LWC, IWC and RWC to pnd_fields: Jana / Manfred / Verena

New surface features: Patrick

Email addresses of the persons mentioned are found in cc. If you have just
general questions about these or other ARTS features, please send the
question to arts-users instead. On our side, we will try to make a small
announcement on the arts mailing lists when we consider a new feature to
be relatively stable and of possible interest to others. That is, more
information will follow.

Kind regards,

Stefan and Patrick
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi