Re: [arts-dev] Fwd: Clouds in ARTS

2019-11-06 Thread Jana Mendrok
> you either generate inside ARTS with
> T-matrix or take it from our "scattering database".
>
> Some brief comments. If you tell me what version you actually are using,
> I can provide more detailed help.
>
> Bye,
>
> Patrick
>
>
> On 2019-11-04 22:07, Claudia Emde wrote:
> > Dear Arts-Developers,
> >
> > here is a question about how to include clouds in ARTS. Since I am not
> > up-to-date, I forward this message to you.
> >
> > Best regards,
> > Claudia
> >
> >
> >  Forwarded Message 
> > Subject:  Clouds in ARTS
> > Date: Mon, 4 Nov 2019 17:40:47 +
> > From: Werner, Frank (329D) 
> > To:   claudia.e...@lmu.de 
> >
> >
> >
> > Hi Claudia,
> >
> > The MLS satellite team here at JPL has recently started using ARTS, in
> > addition to the in house radiative transfer algorithms. Michael Schwartz
> > and I have been the two people playing around with ARTS, trying to
> > incorporate it as another RT option in our code base. We are almost at
> > the point where we have ARTS as another plug-and-play option for our
> > retrievals.
> >
> > One of the last remaining issues is handling of clouds. As far as I can
> > tell, all I have to do is turn the ‘cloudbox’ on and add hydrometeors
> > via ‘ParticleTypeAdd’. Is there a simple example of some cloud
> > absorption you can send me? It doesn’t need to be super realistic or
> > anything. As far as I can tell, the workspace method needs scattering
> > properties and number densities. All I could find in the standard ARTS
> > data sets is the Chevallier_91L stuff in
> > ‘/controlfiles/planets/Earth/Chevallier_91L/’.
> >
> > Again, a simple example of some cloud absorption would be appreciated.
> > Thanks for your help!
> >
> > Best wishes,
> >
> > Frank
> >
> > --
> >
> > Frank Werner
> > Mail Stop 183-701, Jet Propulsion Laboratory
> > 4800 Oak Grove Drive, Pasadena, California 91109, United States
> > Phone: +1 818 354-1918
> >
> > Fax: +1 818 393 5065
> >
> >
> > ___
> > arts_dev.mi mailing list
> > arts_dev.mi@lists.uni-hamburg.de
> > https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
> >
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>


-- 
=
Jana Mendrok, Ph.D.
Deutscher Wetterdienst
Frankfurter Str. 135
63067 Offenbach am Main, Germany

Email: jana.mend...@dwd.de
Phone : +49 (0)69 8062 3139
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] ARTS scattering database interfaces

2019-08-14 Thread Jana Mendrok
Hi Stuart,

I guess I'm to blame here. I did some testing, but obviously not
exhaustively (enough) ;)
Maybe it went unnoticed since there is also a Matlab version of this, and
maybe everyone who has applied the method so far has used Matlab (at least
Patrick, who surely used the interpolation routine - but very likely in
Matlab).

Back when we developed this, we had an internal repository, which likely
still exists, but I don't think there is a semi-public one like for ARTS
and other ARTS-related tools. Yet. It could be a good thing to have, but
that's up to Patrick and/or Stefan to decide and initiate.
It would be good if your changes were at least put into the internal
repository, but I have to leave that to others in the ARTS-DB project group.

Best wishes,
Jana

On Fri, Aug 9, 2019 at 11:10 AM Fox, Stuart 
wrote:

> Hi developers,
>
>
>
> I’ve been trying to use the Python assp methods (downloaded from the
> Zenodo link) to work with some single scattering data. Specifically, I have
> been trying to use assp.assp_interp_size to interpolate some optical
> properties onto a common size grid. However, there seems to be a number of
> bugs and typos in the code which mean that it doesn’t run.
>
>
>
> I think I’ve fixed the code so that it “works” (attached). However, I’m
> curious to know if anyone has actually used this method before, and if
> there are any potential pitfalls I should be aware of? Is there a
> repository anywhere for the database interfaces for up-to-date versions etc?
>
>
>
> I also had to make some changes to typhon to get things running – see
> https://github.com/atmtools/typhon/pull/308
>
>
>
> Cheers,
>
>
>
> Stuart
>
>
>
> Dr Stuart Fox  Radiation Research Manager
>
> *Met Office* FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
> Tel: +44 (0) 330 135 2480  Fax: +44 (0)1392 885681
> Email: stuart@metoffice.gov.uk  Website: www.metoffice.gov.uk
> See our guide to climate change at
> http://www.metoffice.gov.uk/climate-change/guide/
>
>
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>


-- 
=
Jana Mendrok, Ph.D.
Deutscher Wetterdienst
Frankfurter Str. 135
63067 Offenbach am Main, Germany

Email: jana.mend...@dwd.de
Phone : +49 (0)69 8062 3139
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] MDPI Atmosphere Special Issue "Radiative Transfer Models of Atmospheric and Cloud Properties"

2019-05-10 Thread Jana Mendrok
Hi,

Franz (Schreier) pointed me to the MDPI Atmosphere journal, which currently
has a special issue on "Radiative Transfer Models of Atmospheric and Cloud
Properties", open for submission until the end of the year. Maybe of
interest for upcoming ARTS(-related) publications?

https://www.mdpi.com/journal/atmosphere/special_issues/radiative_transfer_models

Wishes,
Jana


-- 
=
Jana Mendrok, Ph.D. (Geoscience)

Email: jana.mend...@gmail.com
Phone : +46 (0)708 860 729
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Fwd: abs_linesReadFromHitran

2019-01-31 Thread Jana Mendrok
Hi,

we recently stumbled over ARTS' strict HITRAN catalogue naming requirements
(see m_abs.cc, l247-270).

First, these naming requirements are not documented for the user at all
(ashes on my own head, too...). That should probably be added.

Furthermore, I think this method at least needs an update (if not a more
flexible solution): H2016 has long been out, and it is tedious for a user to
have to rename it to 2012 to fit ARTS' requirements. Moreover, HITRAN by
default no longer comes as the all-in-one file (so an updated method that can
digest multiple HITRAN par files would be more user friendly) and is not
named HITRAN16 or HITRAN2016 by default anymore (which makes our name
checking somewhat obsolete from H2016 on...).

I might manage to make the simple if-clause extension for 'HITRAN2016'
myself these days (I just have the problem that my notebook basically died;
maybe I can borrow and master someone's *hard coughing* MacBook...).
However, the rest would need a bit more consideration, so I leave that to
you.

best wishes,
Jana

PS: From 1st May I will start working for DWD (the German weather
service/agency) on polarimetric weather radar modeling. So, RT again (or
wave optics; I'm not fully sure yet :-/ ), no ARTS, but maybe making use of
the ARTS scattering database at some point, particularly when melting
particles make their way into it (or even just non-spherical raindrops).

-- 
=====
Jana Mendrok, Ph.D. (Geoscience)

Email: jana.mend...@gmail.com
Phone : +46 (0)708 860 729
=


___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] [s-c-rt] r10866 - in arts/trunk: . controlfiles/artscomponents/absorption controlfiles/artscomponents/faraday controlfiles/artscomponents/helpers controlfiles/artscomponents/hybridscat

2018-02-09 Thread Jana Mendrok
Hi,

> So those tests should have been commented out during this 'intermission'
> but were forgotten. I understand.
>

No, I don't think so. If commented out, they will be forgotten. We just
live with a broken (slow) test case for a bit... (It's all iyRadioLink
references. I'm not sure what Patrick's mid-term plan for that is; I leave it
to him to fix in any case.)



> 76) Ptype value (20) is wrong.It must be (... 20 is in the list
>>> ...)
>>> 87) Ptype value (20) is wrong.It must be (... 20 is in the list
>>> ...)
>>> Since this is unrelated to my change, I will ignore this but
>>> someone
>>> should investigate...
>>>
>>
>> 76) and 87), i guess, have to do with changes i recently made to ptype
>> enum tags. i'll have a look and fix that.
>>
>
Had a look at those two. They run fine here. I guess you haven't updated
your arts-xml-data.

wishes,
Jana

-- 
=
Jana Mendrok, Ph.D. (Geoscience)

Email: jana.mend...@gmail.com
Phone : +46 (0)708 860 729
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] [s-c-rt] r10866 - in arts/trunk: . controlfiles/artscomponents/absorption controlfiles/artscomponents/faraday controlfiles/artscomponents/helpers controlfiles/artscomponents/hybridscat

2018-02-07 Thread Jana Mendrok
Hej Richard,

thanks for the note.

Ps.  Three controlfiles fail the slow test currently:
> 8 - arts.slow.doc.uguide.refs
> 76 - arts.ctlfile.xmldata.artscomponents.doit.
> TestDOITpressureoptimization
> 87 - arts.ctlfile.slow.artscomponents.wfuns.TestDoitJacobians
> These fail because:
> 8) Many unknown WSM
> 76) Ptype value (20) is wrong.It must be (... 20 is in the list
> ...)
> 87) Ptype value (20) is wrong.It must be (... 20 is in the list
> ...)
> Since this is unrelated to my change, I will ignore this but
> someone
> should investigate...
>

76) and 87), I guess, have to do with changes I recently made to the ptype
enum tags. I'll have a look and fix that.

8) is due to the fact that we're currently revising all RTE solvers, updating
them to "your" path integration scheme (as you know). Once that is done,
someone (Patrick, maybe...) will go over the documentation and update it, I
assume. It makes little sense to do that before the work is finished.

wishes,
Jana



-- 
=====
Jana Mendrok, Ph.D. (Geoscience)

Email: jana.mend...@gmail.com
Phone : +46 (0)708 860 729
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] RT4 and auto-increasing number of streams - a cautionary note

2018-01-26 Thread Jana Mendrok
Hi,

Victoria, Patrick, great that you were aware of that! Because, the last time
I talked with someone (Manfred, maybe?), I myself was no longer aware of it.
Realising that is why I sent this note in the first place...

Regarding Patrick's question: at least the intensity component gets smoother
with more scattering, i.e. the interpolation does not get more problematic
with stronger scattering (I don't have a good feeling for the higher Stokes
components; there it might depend, e.g., on the surface properties).

My main concern comes from the interplay of RT4 with ARTS: Using the
auto-increase-streams feature, first the radiation field is calculated at
high(er) angular resolution (as high as the phase-function norm
reproduction (aka energy conservation) requires), then this
high-angular-resolution field is interpolated onto the low-resolution
original angular grid in the RT4 interface, and lastly, in
(i)yCalc/iyEmissionStandard, the radiation value at a given angle is
extracted by interpolation.

It's this two-step interpolation, with the first step degrading resolution,
that worries me (it feels safer to have only one interpolation step, or,
alternatively, to at least go from low to high resolution in the first
step, not the other way round. Either requires some additional internal
logistics, though - which is why it hasn't been done like that yet.)

Moreover, the iyEmissionStandard interpolation is so far hardcoded to linear
interpolation (in angle space), which should perhaps be replaced in any case,
since this is particularly bad; both higher-order polynomials and
interpolation in cos(angle) space seem (!) to give better results.
Interpolation in the RT4 interface can be controlled by the user, who can
select the interpolation order and the interpolation space (angle or
cos(angle)); the default so far, though, is also linear in angle space (for
consistency with iyEmissionStandard and because it is not very clear which
setup is best, or consistently good. That would need a bit more testing...
if anyone has time...).
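
Just to illustrate what I mean by the two interpolation spaces (a toy example
on a made-up smooth field, nothing ARTS-specific):

import numpy as np

# coarse "stream"-like angle grid and an arbitrary smooth toy field on it
za_coarse = np.linspace(0.0, 180.0, 9)                 # degrees
field = 1.0 + np.cos(np.deg2rad(za_coarse))**2         # made-up radiance

za_q = 100.0                                           # some off-stream angle

# (a) linear interpolation in angle space
i_angle = np.interp(za_q, za_coarse, field)

# (b) linear interpolation in cos(angle) space
mu = np.cos(np.deg2rad(za_coarse))                     # descending with angle
i_mu = np.interp(np.cos(np.deg2rad(za_q)), mu[::-1], field[::-1])

print(i_angle, i_mu)   # the two spaces give (slightly) different values

Which of the two is closer to the truth depends on the field - which is
exactly why this needs a bit of systematic testing.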

Victoria, I get back to you separately regarding your specific questions.

best wishes,
Jana


On Wed, Jan 24, 2018 at 8:00 PM, Victoria Sol Galligani <
victoriasgallig...@gmail.com> wrote:

> Hello everyone! Hi Jana!
>
> I was aware of what you are saying about the number of streams. However
>  in the context of the ARTS scattering methods + RTTOV-scatt
> inter-comparison I am working on, I have been testing the sensitivity of
> the output TBs to the number of streams chosen. In this regard, I think it
> would be interesting to run RT4 even if the interpolation induces large
> errors and the scattering matrix is not resolved well. If I run RT4 without
> this warning (actually error because arts doesn't run with its current
> checks) I could easily answer your answer Patrick (if the angle
> interpolation error increases with strong scattering), because I'm running
> with different profiles, some with much more scattering than others. At the
> moment its clear to me that more streams are needed for more scattering
> cases, so I guess that already answers your question Patrick ...
>
> Do you have any advice regarding turning this warning/error off on my arts
> distribution?
>
> Looking forward to sharing my preliminary results on this subject,
> Hugs,
>
> Victoria
>
>
> On Mon, Jan 22, 2018 at 4:56 PM, Patrick Eriksson <
> patrick.eriks...@chalmers.se> wrote:
>
>> Hi Jana,
>>
>> I was aware of that the auto-increase does not change the number of
>> output angles, but still thanks for the warning.
>>
>> That is, for me it was understood that you should check that the start
>> number of streams is high enough, to make sure that angle interpolation
>> does not induce too large errors. But not clear to me is if the angle
>> interpolation error increases with strong scattering. This is something
>> that I have never tested.
>>
>> Your comments seem to indicate that it is the case (i.e. increasing
>> errors), but has you tested it? Without that it is a bit hard to decide
>> what to do.
>>
>> Anyhow, my recommendation is that you focus on general cleaning and
>> documentation. Leave any possible extension of RT4 on this point to us
>> others, if we find it necessary.
>>
>> Bye,
>>
>> Patrick
>>
>>
>>
>>
>>
>> On 2018-01-22 17:46, Jana Mendrok wrote:
>>
>>> Hi,
>>>
>>> With some of you we have discussed (even suggested) the use of the
>>> auto-increase number of streams feature of the RT4 interface.
>>> The background to this feature is that RT4 needs the scattering matrix
>>> to be properly resolved in order to conserve the scattered energy
>>> satisfactorily ...

[arts-dev] RT4 and auto-increasing number of streams - a cautionary note

2018-01-22 Thread Jana Mendrok
Hi,

With some of you we have discussed (even suggested) the use of the
auto-increase number of streams feature of the RT4 interface.
The background to this feature is that RT4 needs the scattering matrix to
be properly resolved in order to conserve the scattered energy
satisfactorily. Using the feature, the number of streams is internally(!)
increased until the scattering matrix resolution is deemed sufficient.

The crucial issue, which I was just reminded of when I went through the code,
is that this increase is only done internally; the output will remain at the
originally set number of streams!

That means *one should not start with a very low number of streams* and
should not let the system completely self-adapt (as, strictly speaking,
that's not what it is doing - the output dimensions won't adapt and will
always remain at the original/starting number of streams). (I'm going to add
that info to the online doc.)

It should be kept in mind that - unlike DISORT - neither RT4 nor ARTS
itself has good interpolation options for "off-stream" angles, i.e. the
number of streams RT4 is set up with determines not only the RT4
solution accuracy (which is improved/adapted by the auto-increase
feature), but also the number of output directions (not affected by the
auto-increase), and hence the accuracy with which the field is known to,
and further applied within, other WSMs of ARTS.

Best wishes,
Jana


Ps.  Something more for developers...

Thinking about it, this seems quite inconvenient. So, the question is what
to do about it, how to change that. Two possible solutions pop into my head:

(1) Instead of interpolating the high-stream solution to the low number of
streams, we could interpolate everything to the highest number of streams
and output the "high-resolution" field.

(2) Re-define (doit_)i_field from a Tensor7 into an ArrayOfTensor6 with one
Tensor6 entry per frequency. This would allow different angular
dimensions per frequency (we would also need to store the angles for each
frequency). However, that would affect the output of the other solvers, too,
and the way (doit_)i_field is applied in (i)yCalc.

So, option (1) seems less of a hassle.

Neither option will solve all issues (e.g., even if the radiation field is
quite smooth, linear interpolation from low-resolution fields won't be very
good, and higher-order interpolation intrinsically requires, well, higher
numbers of streams), but they would solve at least some (like better
conserving the shape of the radiation field where it was derived from a
higher number of streams).

Any opinions? Do you consider this an issue at all, or is a cautionary note
in the documentation enough? If it is an issue, any better ideas for
solutions, or opinions on "my" two options?

And, is anyone other than me willing to implement possible changes?


-- 
=========
Jana Mendrok, Ph.D.

Email: jana.mend...@gmail.com
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] About "abs_coefCalcFromXsec"

2017-07-13 Thread Jana Mendrok
Dear Xiaoying,

there is no way in ARTS (yet?) to use HITRAN absorption cross sections
(except for collision-induced continuum absorption, but that's obviously
not what you are after). Sorry.

Best wishes,
Jana

On Thu, Jul 13, 2017 at 3:28 PM,  wrote:

> Dear Sir:
>
>
>
> I have some questions about ARTS again.
>
>
>
> Some species (O3 H2O CO CH4 ……) can read spectral lines from line by line
> file, such as Hitran2012.par. However, at the same time, I want to
> calculate absorption coefficients from
>
> absorption cross-sections for some species, such as CCl4, F11…….
>
>
>
> From the user guide, I think the ‘abs_coefCalcFromXsec ‘ may do my job.
> But I still have no idea how to apply the method of abs_coefCalcFromXsec?
> How to describe the species which will calculate absorption coefficients
> from absorption cross-sections?  Would you please help me out? I really
> appreciate your help.
>
>
>
> Best regards!
>
>
>
>
>
> Yours sincerely.
>
>
>
> Xiaoying Li
>
>
> --
>
> Li Xiaoying
>
> Associate Professor
>
> Institute of Remote Sensing and Digital Earth, CAS
>
> State Key Laboratory of Remote Sensing Science
>
> P.O.Box9718,Beijing 100101,China
>
>
>
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>
>


-- 
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth and Environment
SE-412 96 Gothenburg, Sweden

Email: jana.mend...@chalmers.se
Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Using amplitude matrices in scattering data?

2017-06-27 Thread Jana Mendrok
Hi again,


I think averaging over orientations and over sizes both is not possible.
> Since we decided to store optical properties for randomly oriented or
> horizontally aligned particles and for e.g. gamma size distributions, it
> was not possible to use the amplitude matrix.
>

As Claudia and Cory mentioned, amplitude matrices are only easily
applicable for individual particles, not for bulks (sorry, Robin, I just
read up on that now...). They are not additive like the scattering
matrices. That shouldn't mean, though, that we can't use them in the
database for saving memory. We would need to extend the interface to ARTS
format output, though.

It should be possible to also have bulk properties stored in the form of an
"effective" amplitude matrix (this because the far-field approximation, where
the amplitude matrix converts the incident electric field to the scattered
one, is valid both for individual particles and for bulks). However, the
derivation of this effective matrix isn't nearly as straightforward as
that of bulk scattering matrices (I'm not convinced my math abilities are
sufficient to figure it out, but someone else's should be, I guess).

Any averaging/integration/interpolation would need to be done in scattering
matrix space (if I understand correctly). So one probably doesn't want to
work with amplitude matrices in the RT solution part of ARTS. But that
doesn't exclude, in my understanding and view, holding the basic data (i.e.
the scat_data, from which we anyway need to extract, or which we anyway need
to transform to, ext_mat and pha_mat) in terms of amplitude matrices.
Again, doing so would require some conversion tools (in atmlab/typhon/the
database interface) that allow deriving "effective" bulk amplitude
matrices.



> Indeed we save a lot of memory, when we store optical properties for a
> size distribution rather than for individual sizes.


It can (it doesn't for higher atmospheric dimensionalities, where the bulk
properties are independent at each atmospheric grid point - or are we missing
something?).

And, the issue primarily popped up regarding the storage of optical
properties of irregular particles from DDA calculations, where we want to
keep them as (quasi-)monodispersions in order to be able to apply arbitrary
size distributions later on. For azimuthally randomly oriented particles,
plain storage is an issue, which is why we're looking for ways to
optimize/minimize it.

My 2 cents.
best wishes,
Jana



-- 
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth and Environment
SE-412 96 Gothenburg, Sweden

Email: jana.mend...@chalmers.se
Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Propagation matrix representation

2017-06-27 Thread Jana Mendrok
Hi,


> F = exp(A), and A =
>
>  a  b  c  d
>  b  a  u  v
>  c -u  a  w
>  d -v -w a
>
> This seems to be the case for all matrices that we are concerned about in
> the propagation parts of ARTS, even in the scattering cases.  Is this true?
>

After a bit of reading (e.g. del Toro Iniesta's "Spectropolarimetry"), I
have come to the conclusion that all RT propagation matrices (i.e. the K)
exhibit this structure. It's an effect of the geometric relations between the
electric/magnetic field and the polarization frame. And it's valid for an
individual (wave) K as well as for the "bulk" K (over different medium
constituents and wave interaction processes).


Got me thinking, though, that we are wasting a lot of memory and a lot of
> computing time keeping the entire 4X4 propagation-matrix [...] rather use
> Vectors of these parameters to represent the matrix.  However, that would
> be overly simplistic and might not be beneficiary enough to justify the
> extra work.
>

It seems a clean solution to redefine ext_mat (and similar containers) to
hold only the (at most) 7-element vector.

As Patrick mentioned, we already do something similar for scattering data,
at least for the input data (when needed in the RT solution, the data are
currently converted to the up-to-4x4 matrix).

The general particle class is not implemented, i.e. the 7-element format is
not fixed. For the azimuthally random case we use the order Kjj, K12, K34
(or, for the A definition above: a, b, (-)w). This, however, is not directly
applicable to the all-sky RT ext_mat format, since that is rather governed
by the Stokes dimensionality instead of symmetry relations (e.g., for
stokes_dim=2 we don't need K34, while for stokes_dim>2 we need
additional elements that are zero in the azimuthally random scattering
particle case).

From the viewpoint of the elements we need for the different stokes_dim
levels, the format should be [ a, | b, | c,u, | d,v,w ], I think (the
vertical dashes indicate the limits for the different stokes_dim, i.e. the
reduced ext_mat would have length 1, 2, 4, or 7 for stokes_dim=1,2,3,4,
respectively).
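
To make the proposed format concrete, a small sketch in plain numpy (nothing
ARTS-specific; the function names are just illustrative) of how such a
reduced ext_mat could be packed and expanded, following the element order
above:

import numpy as np

# number of stored elements per stokes_dim, following [ a | b | c,u | d,v,w ]
N_COMPACT = {1: 1, 2: 2, 3: 4, 4: 7}

def compact_from_matrix(K, stokes_dim):
    """Pack a 4x4 matrix with the structure of A into [a, b, c, u, d, v, w],
    truncated to the length needed for the given stokes_dim."""
    a, b, c, d = K[0, 0], K[0, 1], K[0, 2], K[0, 3]
    u, v, w = K[1, 2], K[1, 3], K[2, 3]
    return np.array([a, b, c, u, d, v, w])[:N_COMPACT[stokes_dim]]

def matrix_from_compact(k, stokes_dim):
    """Expand the reduced vector back to a stokes_dim x stokes_dim matrix."""
    full = np.zeros(7)
    full[:len(k)] = k
    a, b, c, u, d, v, w = full
    A = np.array([[a,  b,  c,  d],
                  [b,  a,  u,  v],
                  [c, -u,  a,  w],
                  [d, -v, -w,  a]])
    return A[:stokes_dim, :stokes_dim]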



>
> Instead, I would like to propose a similar class, PropagationMatrix, that
> can store the entire propagation matrix in parameterized form that is
> reduced to the seven variables above by simple mechanisms.
>

I cannot judge whether it's advantageous to further parametrize the K
contributions from the different physical processes. In any case, I don't see
that this would be mutually exclusive with a matrix-to-vector format change
of ext_mat, since, as written above, ANY K has to exhibit the structure of
A, i.e. all parametrizations anyway act only on the 7 elements
individually, not on the 4x4 elements (or, expressed differently, K34 will
always equal -K43, independent of any parametrization of the different
processes and of the conditions, i.e. the pair can always be expressed by
just one value).

So, if that is advantageous, there could rather be different propmat
classes, depending on the process, which in the RT solution (by the
abs_xsec_agenda or so, I guess) would then need to be combined into one
common ext_mat vector.


Regarding the scattering matrix issues Robin & Patrick brought up: I think
we should discuss those independently of the propagation matrix issues here;
hence, I'm taking that up in a separate mail...

wishes,
Jana


-- 
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth and Environment
SE-412 96 Gothenburg, Sweden

Email: jana.mend...@chalmers.se
Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Scattering calculations when the cloud box is switched off

2017-06-13 Thread Jana Mendrok
Hi,

So, in the specific issue raised by Jacob, I vote for simply going back to
> the old behaviour. It is formally correct to get a clear sky result in a
> calculation without cloud box,


Rolled that back (for all DOIT, DISORT, and RT4 interfaces).


> so let’s trust the user that he knows what he is doing.


Yeah, well, my trust in "the user" was challenged lately when even an
expert user boldly overwrote default settings for a scattering solver
without sufficient thought about the effects that might have. ;)
(I admit that it might have been a bad choice of user parameter in the first
place, and/or a lack of sufficiently warning documentation on my part.)

wishes,
Jana

-- 
=========
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth and Environment
SE-412 96 Gothenburg, Sweden

Email: jana.mend...@chalmers.se
Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Scattering calculations when the cloud box is switched off

2017-06-07 Thread Jana Mendrok
Hi Jakob,

thanks for your feedback!
It was me who made that change, for the reason you also identified: that
otherwise it easily goes unnoticed that no scattering has actually been
done. This has happened to me a few times. And, after all, when calling a
scattering solver the user presumably intends to actually perform a
scattering calculation. I understand your issues, though.

Spontaneously, I don't see an option that satisfies both. Below are a couple
of options I can think of for dealing with this issue (and, in the PS, some
options that you yourself could apply, without changes to the official code).
I would appreciate feedback from other developers (and users) on what you
prefer and what is considered more important (my issues of course seem more
important - to me. Very subjective.). Or maybe you have better ideas for how
to resolve that conflict.

So, code-wise we could (either):

- generally go back to the old behaviour;

- stay with the new behaviour;

- introduce a ("robust"?) option to allow the user to control the
no-cloudbox behaviour;

- make cloudboxSetAutomatically behave differently for clear-sky cases
(return a minimal cloudbox? and maybe let the user control which behaviour
- minimal or no cloudbox - is applied?).

wishes,
Jana


PS: Some options you yourself have, Jakob:

- You can of course locally remove the newly introduced error throwing and
go back to the old behaviour in your own ARTS compilation.

- With the current version (no cloudbox throws an error) you could make a
"cloudy" run (with empty results for the pure clear-sky cases) and an
explicit clear-sky run, and post-process the results into whatever you need.

- You could use a manually set cloudbox (that can be better for some study
setups anyway; it ensures better comparability between different cases, as
they are then equally affected by scattering solver errors (sphericity,
vertical resolution, interpolation, etc.)).


On Wed, Jun 7, 2017 at 1:26 PM, Jakob Sd  wrote:

> Hi,
>
> recently there has been a change in the way DOIT and DISORT handle
> atmospheres where the cloud box is switched off (cloudbox_on = 0). Before,
> they just skipped the scattering calculation, threw a warning, and
> everything was ok, as the clear-sky calculations afterwards took care of
> it.
> But now, they throw a runtime error, which means that the calculation is
> stopped and the results will be empty for that atmosphere. I understand
> that this runtime error makes sense if someone wants to calculate with
> scattering but by mistake switches off the cloud box. But if someone has a
> batch of atmospheres from which some are clear sky atmospheres and uses
> cloudboxSetAutomatically, this can be quite uncomfortable, because all the
> clear sky atmospheres that were correctly calculated before, are now empty
> and the user has to manually select those atmospheres from his batch and
> calculate them using clear sky ARTS.
>
> Greetings from Hamburg,
>
> Jakob
>
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>
>


-- 
=
Jana Mendrok, Ph.D. (Researcher)
Chalmers University of Technology
Department of Space, Earth and Environment
SE-412 96 Gothenburg, Sweden

Email: jana.mend...@chalmers.se
Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Science with ARTS...

2016-09-15 Thread Jana Mendrok
Argh, forget it.
I've just seen that it's on the webpage already (wow, Oliver!).

On Fri, Sep 16, 2016 at 8:53 AM, Jana Mendrok 
wrote:

> good morning,
>
> just stumbled upon the one below. in case we like to have some more exotic
> stuff (in the sense: fancy typesetting though i can not read this) on the
> ARTS webpage ;)
>
> unfortunately, it's slightly off with the ARTS papers it cites (ARTS 1)
> for what it seems to be about (clouds & scattering).
>
> http://wulixb.iphy.ac.cn/CN/10.7498/aps.65.134102
>
> have a nice day!
> Jana
>
>
> --
> =========
> Jana Mendrok, Ph.D. (Project Assistent)
> Chalmers University of Technology
> Earth and Space Sciences
> SE-412 96 Gothenburg, Sweden
>
> Phone : +46 (0)31 772 1883
> =====
>



-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Science with ARTS...

2016-09-15 Thread Jana Mendrok
Good morning,

just stumbled upon the one below, in case we'd like to have some more exotic
stuff (in the sense of fancy typesetting, though I cannot read it) on the
ARTS webpage ;)

Unfortunately, it's slightly off with the ARTS papers it cites (ARTS 1) for
what it seems to be about (clouds & scattering).

http://wulixb.iphy.ac.cn/CN/10.7498/aps.65.134102

have a nice day!
Jana


-- 
=====
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] scat_data issues

2016-08-28 Thread Jana Mendrok
> i kept & extended scat_dataCheck and do the checks for NaN, neg. values
> and Z norm in there.
> cloudbox_checkedCalc calls scat_dataCheck, but has the option to skip it
> completely or to skip the Z norm check therein (oops, haven't updated the
> doc yet. is coming soon.)
>

Doc done now, too.

Note: the update of cloudbox_checkedCalc might make some of your ARTS
scattering calculation setups crash due to the newly introduced default check
on the scattering matrix normalization. In this case, either set
sca_mat_threshold to a higher value (default = 1e-2) or skip the
normalization test by setting scat_data_check to something other than 'none'
and 'all' (generally, we discourage switching the test off, though).

wishes,
Jana


ps.

> Regarding normalisation, how big difference is there between
>>>>> quadrature rules? 1%, 10% or 100%? Seems reasonable to at least
>>>>> check that normalisation is OK inside a factor of 2. (With an
>>>>> option to deactive this, if you use a solver anyhow checking
>>>>> this.)
>>>>>
>>>>
It is not trivial to separate (pure) quadrature issues from grid density
(when the grid density is high enough, trapezoidal integration is fine...).
In ARTS we hadn't used anything but trapezoidal integration before
(re-)adding the interfaces to DISORT and RT4, and I actually didn't go so far
as to check how well their quadrature methods perform on our standard data
with its equidistant angle grids.

However, what I have seen is that the scattering matrix norm deviated by up
to 10% from the value expected from the ext-abs difference. Generally, this
tended to be worse for larger particles, if I remember correctly (which is
straightforward to understand, as larger particles exhibit a narrower &
stronger forward peak, which is hard to capture on equidistant grids).

By the way, the normalization threshold used by scat_dataCheck (and now also
cloudbox_checkedCalc) is not directly an absolute or relative deviation of
the scattering matrix norm. Instead, we use the deviation rescaled by the
extinction, which is effectively the absolute deviation in the scattering
albedo.
This was chosen in order to avoid unnecessarily high demands on
low-scattering particles (when scattering is low, numerical issues might
occur and trigger a relative-norm-deviation threshold, even though the
scattering contribution from these particles is low; an
absolute-norm-deviation threshold is hard to determine, as its relevance
strongly depends on the total extinction).
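
In other words, the quantity compared against the threshold is roughly the
following (a sketch for the totally random orientation case only, not the
actual scat_dataCheck code; names are made up):

import numpy as np

def albedo_deviation(za_grid_deg, Z11, K11, a1):
    """Deviation between the angle-integrated phase function and the
    scattering cross section (K11 - a1), rescaled by the extinction K11 -
    effectively the absolute error in the single scattering albedo."""
    theta = np.deg2rad(za_grid_deg)
    # 4pi solid-angle integral of Z11 for an azimuthally symmetric phase
    # matrix, done with a plain trapezoidal rule
    w = Z11 * np.sin(theta)
    sca_from_Z = 2.0 * np.pi * np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(theta))
    return abs(sca_from_Z - (K11 - a1)) / K11

So the default of 1e-2 mentioned above corresponds to an absolute albedo
error of 0.01.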

I've now set the norm threshold default to 1%.
Part of the single scattering data in arts-xml-data actually does not
pass the check with this threshold. As far as I've tested them, they all
pass with threshold=5%.
I'm not yet settled on whether I will just increase the default threshold, or
whether it's more appropriate to post-process/modify the data (this goes
back to the quadrature/grid density problem; the single scattering data at
each individual angle grid point are very likely correct).




-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] scat_data issues

2016-08-28 Thread Jana Mendrok
Done.

I kept & extended scat_dataCheck and do the checks for NaN, negative values,
and Z norm in there.
cloudbox_checkedCalc calls scat_dataCheck, but has the option to skip it
completely or to skip the Z norm check therein (oops, I haven't updated the
doc yet; that is coming soon).
In this way, one can use scat_dataCheck separately to ensure/test a set of
scat_data, and later confidently skip the cloudbox_checkedCalc checks on
scat_data.

wishes,
Jana

On Fri, Aug 26, 2016 at 4:34 PM, Jana Mendrok 
wrote:

> Hi,
>
> thanks for the input, Stefan & Patrick!
>
> I favor Patrick's suggestion to make more rigid scat_data checks part of
> cloudbox_checkedCalc (mainly because scat_data along with pnd_field and
> cloudbox_limits defines the cloudbox or, better said, the interface to the
> scatt solvers. havin some tests already done and not breaking controlfiles
> are some nice additions.). if we really want to have a separate
> scat_data_checkedCalc (or so), we can still separate that later anyways.
>
> So, i'll add some code. With default Nan/neg and scamat norm checks, but
> either with a "know-what-i'm-doing" switch-off option.
>
> thanks & wishes,
> Jana
>
> On Fri, Aug 26, 2016 at 3:30 PM, Patrick Eriksson <
> patrick.eriks...@chalmers.se> wrote:
>
>> Hi Stefan,
>>
>> In general I agree to not put all checks in the same function. But this
>> is not a clear cut case. The size of scat_data shall be consistent with
>> pnd_field, that in its turn should be consistent with cloudbox_limits. So
>> no clear place where to separate scat_data and other cloudbox checks.
>>
>> In fact cloudbox_checkCalc already contains some checks of scat_data.
>> Which makes sense for me. scat_data, pnd_field and cloudbox_limits are the
>> main "cloudbox variables".
>>
>> Another reason is that if we introduce scat_data_check, we must add it to a
>> number of ARTS WSMs, and modify many, many cfiles. Just personally, I'd
>> probably need to modify about 50 scripts if scat_dataCheck becomes mandatory.
>>
>> Bye,
>>
>> Patrick
>>
>>
>>
>>
>>
>> On 2016-08-26 13:38, Stefan Buehler wrote:
>>
>>> Hi all,
>>>
>>> I agree with Patrick that a mandatory check method is the best
>>> solution. :-)
>>>
>>> But modernising scat_dataCheck and making it mandatory (via a
>>> _checked flag) seems cleaner to me than including it in the cloudbox
>>> check. Better not to entangle issues that can be regarded separately.
>>>
>>>
>>> These check functions seem to have evolved as a general strategy, I
>>> think they are quite intuitive and user friendly. More so if their
>>> scope is limited and clear, less if there is a single huge check
>>> function that does all kinds of things.
>>>
>>> All the best,
>>>
>>> Stefan
>>>
>>> P.s.: Jana, I think this is really great work, and very timely.
>>> Oliver and I spent some time yesterday looking at your test case and
>>> debugging the scat data merging method. So, the comparison of the
>>> different solvers has already been fruitful to squash bugs.
>>>
>>> On 26 Aug 2016, at 08:59, Patrick Eriksson
>>>>  wrote:
>>>>
>>>> Hi Jana,
>>>>
>>>> I did not have scat_dataCheck active in my memory. I think this is
>>>> a lesson that non-mandatory stuff will be forgotten. To avoid nasty
>>>> errors and incorrect results, we should make most basic checks
>>>> mandatory.
>>>>
>>>> My suggestion is then to extend cloudbox_checkedCalc. My view on
>>>> cloudbox_checkedCalc is that the tests we discuss are in fact
>>>> inside the scope of the WSM. So just strange that they are not
>>>> already there!
>>>>
>>>> (If some tests turn out to be computationally demanding, then I
>>>> prefer to have option flags of type "I know what I am doing", to
>>>> deactive the checks.)
>>>>
>>>> Regarding normalisation, how big difference is there between
>>>> quadrature rules? 1%, 10% or 100%? Seems reasonable to at least
>>>> check that normalisation is OK inside a factor of 2. (With an
>>>> option to deactive this, if you use a solver anyhow checking
>>>> this.)
>>>>
>>>> Bye,
>>>>
>>>> Patrick
>>>>
>>>>
>>>>
>>>> On 2016-08-25 20:10, Jana Mendrok wrote:
>>>>
>>>>> Hi,
>>&

Re: [arts-dev] scat_data issues

2016-08-26 Thread Jana Mendrok
Hi,

thanks for the input, Stefan & Patrick!

I favor Patrick's suggestion to make the more rigid scat_data checks part of
cloudbox_checkedCalc (mainly because scat_data, along with pnd_field and
cloudbox_limits, defines the cloudbox or, better said, the interface to the
scattering solvers. Having some tests already done and not breaking
controlfiles are some nice additions.). If we really want to have a separate
scat_data_checkedCalc (or so), we can still separate that out later.

So, I'll add some code, with default NaN/negative-value and scattering-matrix
norm checks, but with a "know-what-I'm-doing" switch-off option.

thanks & wishes,
Jana

On Fri, Aug 26, 2016 at 3:30 PM, Patrick Eriksson <
patrick.eriks...@chalmers.se> wrote:

> Hi Stefan,
>
> In general I agree to not put all checks in the same function. But this is
> not a clear cut case. The size of scat_data shall be consistent with
> pnd_field, that in its turn should be consistent with cloudbox_limits. So
> no clear place where to separate scat_data and other cloudbox checks.
>
> In fact cloudbox_checkCalc already contains some checks of scat_data.
> Which makes sense for me. scat_data, pnd_field and cloudbox_limits are the
> main "cloudbox variables".
>
> Another reason is that if we introduce scat_data_check, we must add it to a
> number of ARTS WSMs, and modify many, many cfiles. Just personally, I'd
> probably need to modify about 50 scripts if scat_dataCheck becomes mandatory.
>
> Bye,
>
> Patrick
>
>
>
>
>
> On 2016-08-26 13:38, Stefan Buehler wrote:
>
>> Hi all,
>>
>> I agree with Patrick that a mandatory check method is the best
>> solution. :-)
>>
>> But modernising scat_dataCheck and making it mandatory (via a
>> _checked flag) seems cleaner to me than including it in the cloudbox
>> check. Better not to entangle issues that can be regarded separately.
>>
>>
>> These check functions seem to have evolved as a general strategy, I
>> think they are quite intuitive and user friendly. More so if their
>> scope is limited and clear, less if there is a single huge check
>> function that does all kinds of things.
>>
>> All the best,
>>
>> Stefan
>>
>> P.s.: Jana, I think this is really great work, and very timely.
>> Oliver and I spent some time yesterday looking at your test case and
>> debugging the scat data merging method. So, the comparison of the
>> different solvers has already been fruitful to squash bugs.
>>
>> On 26 Aug 2016, at 08:59, Patrick Eriksson
>>>  wrote:
>>>
>>> Hi Jana,
>>>
>>> I did not have scat_dataCheck active in my memory. I think this is
>>> a lesson that non-mandatory stuff will be forgotten. To avoid nasty
>>> errors and incorrect results, we should make most basic checks
>>> mandatory.
>>>
>>> My suggestion is then to extend cloudbox_checkedCalc. My view on
>>> cloudbox_checkedCalc is that the tests we discuss are in fact
>>> inside the scope of the WSM. So just strange that they are not
>>> already there!
>>>
>>> (If some tests turn out to be computationally demanding, then I
>>> prefer to have option flags of type "I know what I am doing", to
>>> deactive the checks.)
>>>
>>> Regarding normalisation, how big difference is there between
>>> quadrature rules? 1%, 10% or 100%? Seems reasonable to at least
>>> check that normalisation is OK inside a factor of 2. (With an
>>> option to deactive this, if you use a solver anyhow checking
>>> this.)
>>>
>>> Bye,
>>>
>>> Patrick
>>>
>>>
>>>
>>> On 2016-08-25 20:10, Jana Mendrok wrote:
>>>
>>>> Hi,
>>>>
>>>> i'm currently implementing an interface to the RT4 solver and am
>>>> testing it. that was at least the original plan. partly it
>>>> however turns out to be more of "fall into the traps" and
>>>> "stumble upon issues" with other the solver (which i intended to
>>>> use as reference)...
>>>>
>>>> current issue i stumbled upon is that there seems to be no
>>>> (sufficiently) rigid test on validity (or at least
>>>> eligibility/adequacy/proper qualification) of the scattering
>>>> data (scat_data).
>>>>
>>>> i've created my scat_data from ARTS' TMatrix interface. Happened
>>>> that one of the particles was too challenging for TMatrix and
>>>> produced a couple of NaN and also negative extinction and
>>>>

[arts-dev] scat_data issues

2016-08-25 Thread Jana Mendrok
Hi,

I'm currently implementing an interface to the RT4 solver and am testing
it. That was at least the original plan; partly, however, it turns out to be
more a case of "falling into traps" and "stumbling upon issues" with the
other solvers (which I intended to use as references)...

The current issue I stumbled upon is that there seems to be no (sufficiently)
rigid test of the validity (or at least eligibility/adequacy/proper
qualification) of the scattering data (scat_data).

I've created my scat_data with ARTS' TMatrix interface. It happened that one
of the particles was too challenging for the T-matrix code and produced a
couple of NaNs and also negative extinction and absorption coefficients (K11
and alpha1). While the NaNs could be caught (equivalent to a T-matrix
failure), that's hardly possible for the negatives (they are "regular"
T-matrix output).

The ARTS scattering solvers reacted very differently to the presence of
these invalid data:
- RT4 gave a runtime error due to the scattering matrix normalization being
too far off (I guess DISORT would do the same; it wasn't tested here, as I
used oriented particles, which aren't handled by DISORT).
- DOIT ran through, providing results that did not look immediately
suspicious :-O
- MC ran into an assertion within an interpolation.

That's quite unsatisfactory, I think, and should be handled in some
consistent manner. The question is how.
Some ideas are below. Do you have further ideas, suggestions, or comments?

I appreciate any input.
wishes,
Jana


My thoughts/ideas:

- Leave it to each solver to check for that (but then we need to go over
all of them to add that)?

- Make an additional check method and another check variable for the
scat_data?
There is already a scat_dataCheck WSM, which is rarely used. It e.g. checks
that the scat_data cover the required frequencies, but also the scattering
matrix normalization, the latter only being available for random orientation,
though. In my experience it hasn't been too helpful (data coming from the
atmlab Mie interface - as well as from ARTS' TMatrix interface, as I learned
these days - frequently don't pass the normalisation check, which is, among
other things, due to the type of quadrature used. According to my experience,
such norm issues are better handled by each solver separately), and since
it's not mandatory, I avoid it.
But it would be an option to modify this (make the norm check optional and
instead check for NaN and negative values, roughly as sketched at the end of
this mail) and make it mandatory (through a checked flag).

- An issue is, of course, that one does not really want to check frequently
used data (e.g. the arts-xml-data contents, or data from the future ice/snow
SSP database we are creating...) for those invalid entries over and over
again. So maybe give the data structure itself a flag and provide a WSM that
does the checking and updates the flag? The check method above could then
e.g. look for this flag and apply the whole check suite only to so far
unchecked data.
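
For illustration, the kind of basic NaN/negative check I have in mind (just a
numpy sketch, not an actual WSM; array names and the assumed layout are made
up):

import numpy as np

def basic_scat_data_ok(ext_mat_data, abs_vec_data, pha_mat_data):
    """Minimal validity check: no NaNs anywhere, and no negative extinction
    (K11), absorption (a1), or phase function (Z11) values."""
    for arr in (ext_mat_data, abs_vec_data, pha_mat_data):
        if np.isnan(arr).any():
            return False
    # assuming the matrix/vector element dimension is last,
    # with K11 / a1 / Z11 as the respective first element
    return not ((ext_mat_data[..., 0] < 0).any()
                or (abs_vec_data[..., 0] < 0).any()
                or (pha_mat_data[..., 0] < 0).any())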



-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] Question about ARTS

2016-05-04 Thread Jana Mendrok
Hi Li,

Some rough guesses on your issues:

1) Increasing T by 500 K seems quite drastic and might yield some unexpected
effects (T affects both the line strength and the line shape! Also note
that ARTS' default partition functions have been fitted over T of 150-300 K
only, i.e. they might go haywire at such high temperatures). Did you give it
a try with more reasonable values?

2) Generally, ARTS should be able to simulate mid-IR signals. The first thing
catching my eye in your setup is that you use H2O (and O2) absorption lines
from the line database AND the PWR full absorption model on top. In addition,
PWR is exclusively for microwaves (up to 300 or 1000 GHz); I have no idea how
it behaves at mid-IR frequencies.
I wonder why no error is thrown when using PWR at such far-off frequencies,
but I guess we have thought about it before and decided there shouldn't be
one, e.g. for performance reasons...
Try using the CKD_MT continuum instead (of PWR). Also, make the frequency
margin for the lines considered from the line file wider: for H2O the
margins should be 750 GHz when using CKD_MT (and wider if no continuum is
applied).

Hope that helps.
best wishes,
Jana

On Wed, May 4, 2016 at 8:03 AM, 李小英  wrote:

> Dear Sir:
>
>
> I am so sorry that I have some questions about ARTS again.
> I use some figures to explain my questions in a document file, which is
> attached in this email. I appreciate your help.
>
>
> Best regards!
>
> Yours sincerely.
>
> Xiaoying Li
>
>
>
> --
>
> Li Xiaoying
>
> Associate Professor
>
> Institute of Remote Sensing and Digital Earth, CAS
>
> State Key Laboratory of Remote Sensing Science
>
> P.O.Box9718,Beijing 100101,China
>
>
>
>
>
>
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>
>


-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


Re: [arts-dev] bug or feature: metmm freq_number vs. freq_spacing

2016-03-03 Thread Jana Mendrok
Hi Stefan,

I do understand that the WSM f_gridMetMM provides the tighter of the two.

But aren't the sensor descriptions, with freq_number set depending on the
metmm_accuracy levels, tailor-made with respect to speed and a threshold
accuracy? Why would we prepare them first only to then overwrite them (e.g.
slowing down the "very very fast" setup significantly, at least for
ICI/ISMAR, and occasionally forcing f_grid to contain more points than
necessary for the intended and documented accuracy threshold)? That does not
really make sense to me.

I assumed metmm_accuracy=0 should ALWAYS set up an f_grid with exactly one
grid point per passband. The user can of course still override this by
(re-)setting either freq_number or freq_spacing. But here, we provide a
setting that overrides itself. As written, that seems odd to me.

I couldn't find this behaviour in the documentation either. The AMSU-metmm
report says "The first configuration selects 1 frequency in the middle of
every channel. The second, third and fourth configuration select different
numbers of frequencies for every channel." and never mentions spacing.
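
Just to spell out the behaviour as I now understand it (a toy sketch of the
rule only, not the actual f_gridMetMM code; the numbers are purely
illustrative):

import math

def points_per_passband(bandwidth, freq_number, freq_spacing):
    """Roughly: the tighter of the two constraints wins per passband."""
    n_from_spacing = math.ceil(bandwidth / freq_spacing)
    return max(freq_number, n_from_spacing)

# e.g. a 1.5 GHz wide passband with freq_number=1 but freq_spacing=500 MHz
# ends up with 3 grid points instead of the 1 point I had expected
print(points_per_passband(1.5e9, 1, 500e6))   # -> 3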

But if you're sure that's what you want, I'll stop complaining ;-)
wishes,
Jana

On Thu, Mar 3, 2016 at 6:15 PM, Stefan Buehler <
stefan.bueh...@uni-hamburg.de> wrote:

> Hi Jana,
>
> this behaviour is intentional, you are guaranteed to get at least the
> frequency number and spacing you specified. The tighter constraint wins.
>
> Best wishes,
>
> Stefan
>
> > On 02 Mar 2016, at 14:06, Oliver Lemke 
> wrote:
> >
> > Hi Jana,
> >
> > In general, the higher accuracy always wins. So, if the bandwidth
> divided by freq_numbers is larger than freq_spacing, the latter will be
> used.
> >
> > Whether the behaviour you see with metmm_accuracy is intentional, I
> leave to Alex to comment on.
> >
> > cheers,
> > /oliver
> >
> >
> >> On 2 Mar 2016, at 10:46, Jana Mendrok  wrote:
> >>
> >> Hi,
> >>
> >> when running the metmm system with the provided sensor setups, i had
> expected that the set freq_numbers (depending on metmm_accuracy choice) are
> ruling, ie. it's always them that are applied.
> >> however, i found that freq_spacing is set such that it overrules
> freq_number occasionally (eg. several of the ISMAR channels with
> metmm_accuracy=0, leading to 58 f_grid points for the 15 existing ISMAR
> channels, instead of 2x15=30 I had expected).
> >>
> >> is this intended behaviour? or a bug?
> >>
> >> wishes,
> >> Jana
> >
> > ___
> > arts_dev.mi mailing list
> > arts_dev.mi@lists.uni-hamburg.de
> > https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>
> ___
> arts_dev.mi mailing list
> arts_dev.mi@lists.uni-hamburg.de
> https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi
>



-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi


[arts-dev] Fwd: bug or feature: metmm freq_number vs. freq_spacing

2016-03-02 Thread Jana Mendrok
Hi,

when running the metmm system with the provided sensor setups, I had
expected that the set freq_numbers (depending on the metmm_accuracy choice)
are ruling, i.e. that it is always them that are applied.
However, I found that freq_spacing is set such that it occasionally overrules
freq_number (e.g. for several of the ISMAR channels with metmm_accuracy=0,
leading to 58 f_grid points for the 15 existing ISMAR channels, instead of
the 2x15=30 I had expected).

Is this intended behaviour, or a bug?

wishes,
Jana



-- 
=
Jana Mendrok, Ph.D. (Project Assistent)
Chalmers University of Technology
Earth and Space Sciences
SE-412 96 Gothenburg, Sweden

Phone : +46 (0)31 772 1883
=
___
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi