Dear list members,

The slides of the following tutorials can be downloaded from the web
site of the 3rd International Symposium on Imprecise Probabilities and
Their Applications:


Jean-Marc Bernard: Imprecise Dirichlet model for multinomial data
(http://www.sipta.org/~isipta03/jean-marc.pdf).
ABSTRACT:
The Imprecise Dirichlet Model (IDM) is a model for statistical
inference and coherent learning from multinomial data, and, more
generally, for categorical data under various sampling models. The IDM
was proposed by Walley (1996, JRSS B, 58(1), 3-57) as an
alternative to other objective approaches to inference, since it aims
at modeling prior ignorance about the unknown chances $\theta$ of a
multinomial process. The IDM is an imprecise probability model in
which prior uncertainty about $\theta$ is described by a set of prior
Dirichlet distributions. The set of priors is updated, by means of
Bayes' theorem, into a set of Dirichlet posterior distributions, so
that the IDM can be viewed as a generalization of Bayesian conjugate
analysis. As in any imprecise probability model, inferences can be
summarized by computing upper and lower probabilities for any event of
interest. The IDM induces prior ignorance (characterized by maximally
imprecise probabilities) about $\theta$ and many other derived
parameters. The IDM has many advantages over alternative objective
inferential models. It satisfies several general principles for
inference that no other model jointly satisfies: symmetry, coherence,
the likelihood principle, and other desirable invariance principles. By
conveniently choosing its hyperparameter $s$ (which determines the
extent of imprecision), the IDM can be tailored to encompass
alternative objective models, either frequentist or Bayesian. After
presenting the IDM, both from the parametric viewpoint (inferences
about $\theta$) and the predictive viewpoint (inferences about future
observations), we shall review its major properties, and then focus on
applications of the IDM for various statistical problems.
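
As a concrete illustration, here is a minimal sketch in Python of the
standard predictive bounds from Walley's 1996 paper (the function name
and example data are my own, not material from the slides): after
observing counts n_j in N multinomial trials, the IDM's lower and upper
probabilities that the next observation falls in category j are
n_j/(N+s) and (n_j+s)/(N+s).

    # Minimal sketch of the IDM's posterior predictive bounds
    # (illustrative, not taken from the tutorial).
    def idm_bounds(counts, s=2.0):
        """Lower/upper predictive probabilities per category."""
        N = sum(counts)
        lower = [n / (N + s) for n in counts]
        upper = [(n + s) / (N + s) for n in counts]
        return lower, upper

    # Example: 6 red, 3 blue, 1 green in 10 draws, with s = 2:
    # lower = [0.50, 0.25, 0.083], upper = [0.67, 0.42, 0.25].
    # With no data (N = 0) every category gets [0, 1]: prior ignorance.
    lo, up = idm_bounds([6, 3, 1])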


Gert de Cooman: A gentle introduction to imprecise probability models
and their behavioral interpretation
(http://www.sipta.org/~isipta03/gert.pdf).
ABSTRACT:
The tutorial will introduce basic notions and ideas in the theory of
imprecise probabilities. It will highlight the behavioural
interpretation of several types of imprecise probability models, such
as lower previsions, sets of probability measures, and sets of
desirable gambles, as well as their mutual relationships. Rationality
criteria for these models, based on their interpretation, will be
discussed, such as avoiding sure loss and coherence. We shall also
touch upon the issues of conditioning and of decision making with such
models.
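
To make these behavioural models concrete, here is a minimal sketch in
Python (illustrative, not taken from the tutorial) of how lower and
upper previsions arise as the lower and upper envelopes of expectations
over a finite set of probability mass functions:

    # Lower/upper prevision of a gamble as envelopes of expectations
    # over a finite credal set (illustrative names and numbers).
    def lower_upper_prevision(gamble, credal_set):
        expectations = [sum(p * x for p, x in zip(pmf, gamble))
                        for pmf in credal_set]
        return min(expectations), max(expectations)

    # A gamble on a three-outcome space and two extreme mass functions:
    gamble = [1.0, -2.0, 4.0]
    credal_set = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]]
    low, up = lower_upper_prevision(gamble, credal_set)  # (0.4, 0.7)

Indeed, Walley's lower envelope theorem says that coherent lower
previsions are exactly the lower envelopes of nonempty sets of linear
previsions, which is one of the mutual relationships the tutorial
covers.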


Fabio G. Cozman: Graph-theoretical models for multivariate modeling
with imprecise probabilities
(http://www.sipta.org/~isipta03/fabio.pdf).
ABSTRACT:
Markov chains, Markov fields, Bayesian networks, and influence
diagrams are often used to construct standard probability
models. These models share the property that they are based on
graphs. We ask: how do these models behave when probability values are
imprecise? What are the independence concepts at play, and what are
the computational tools that we could use to manipulate the resulting
models? This tutorial will describe results that have been obtained in
recent years, mostly in the field of artificial intelligence,
concerning graphical models and imprecise probabilities. Most results
have focused on directed acyclic graphs, with interesting applications
ranging from classification to sensitivity analysis in expert systems.
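
As a toy illustration of what such computations look like (a minimal
Python sketch under assumed interval parameters; not an example from
the tutorial), consider a two-node credal network A -> B with binary
variables, where each local probability is only known to lie in an
interval. Under strong independence, P(B=1) is multilinear in the
parameters, so its bounds are attained at the interval endpoints:

    from itertools import product

    # Assumed interval parameters for the toy network A -> B:
    p_a1 = (0.3, 0.5)        # P(A=1)
    p_b1_a1 = (0.6, 0.8)     # P(B=1 | A=1)
    p_b1_a0 = (0.1, 0.2)     # P(B=1 | A=0)

    # Enumerate the extreme points of the box of parameter values.
    values = [pa * b1 + (1 - pa) * b0
              for pa, b1, b0 in product(p_a1, p_b1_a1, p_b1_a0)]
    lower, upper = min(values), max(values)   # 0.25 and 0.50

In larger networks the number of such extreme points grows
exponentially, which is why the computational tools mentioned above are
a research topic in their own right.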


Charles F. Manski: Partial identification of probability distributions
(http://www.sipta.org/~isipta03/charles.pdf).
ABSTRACT:
This tutorial exposits elements of the research program presented in
Manski, C., Partial Identification of Probability Distributions,
Springer-Verlag, 2003. The approach is deliberately conservative. The
traditional way to cope with sampling processes that partially
identify population parameters has been to combine the available data
with assumptions strong enough to yield point identification. Such
assumptions often are not well motivated, and empirical researchers
often debate their validity. Conservative analysis enables researchers
to learn from the available data without imposing untenable
assumptions. It also makes plain the limitations of the available
data. Whatever the particular subject under study, the approach
follows a common path. One first specifies the sampling process
generating the available data and asks what may be inferred about
population parameters of interest in the absence of assumptions
restricting the population distribution. One then asks how the
(typically) set-valued identification regions for these parameters
shrink if certain assumptions (e.g., statistical independence or
monotonicity assumptions) are imposed. Major areas of application
include regression with missing outcome or covariate data, analysis of
treatment response, and decomposition of probability mixtures.
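
The flavour of the approach can be conveyed with the simplest case,
worst-case bounds on a mean with missing outcome data (a minimal Python
sketch under assumed data; not an example from the book):

    # Worst-case bounds on E[y] when some outcomes are missing and y
    # is known to lie in [y_min, y_max]; no assumption is made about
    # the missing values.
    def mean_bounds(observed, n_missing, y_min, y_max):
        n_obs = len(observed)
        p_obs = n_obs / (n_obs + n_missing)
        mean_obs = sum(observed) / n_obs
        lower = mean_obs * p_obs + y_min * (1 - p_obs)
        upper = mean_obs * p_obs + y_max * (1 - p_obs)
        return lower, upper

    # Example: 8 observed outcomes in [0, 1] with mean 0.625, 2 missing:
    lo, up = mean_bounds([0.9, 0.4, 0.6, 0.8, 0.5, 0.7, 0.6, 0.5],
                         2, 0.0, 1.0)   # (0.50, 0.70)

The identification region [0.50, 0.70] is what the data alone reveal;
assumptions such as independence or monotonicity can only shrink it.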


Sujoy Mukerji: Imprecise probabilities and ambiguity aversion in
economic modeling (http://www.sipta.org/~isipta03/sujoy.pdf).
ABSTRACT:
The talk will have, roughly, two parts. The first part will give an
introductory account of decision theoretic frameworks, useful in
economic modeling, that incorporate the hypothesis that cognitive
limitations may imply that decision makers' beliefs are represented by
imprecise probabilities. The second part will discuss some examples of
economic modeling that apply such frameworks.
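
For a taste of the first part, here is a minimal sketch in Python
(illustrative, not from the talk) of maxmin expected utility, the
best-known ambiguity-averse criterion: each act is scored by its worst
expected utility over a set of priors, and the act with the best worst
case is chosen.

    # Maxmin expected utility over a finite set of priors
    # (illustrative names and numbers).
    def maxmin_choice(acts, priors):
        """acts: dict name -> state-contingent utilities; priors: pmfs."""
        def worst_eu(utilities):
            return min(sum(p * u for p, u in zip(pmf, utilities))
                       for pmf in priors)
        return max(acts, key=lambda name: worst_eu(acts[name]))

    # Ellsberg-style example: a bet on a known 50/50 urn (sure expected
    # utility 0.5) vs a bet on an ambiguous urn whose red/black
    # proportion is only known to lie between 30/70 and 70/30.
    acts = {"bet_known": [0.5, 0.5], "bet_ambiguous": [1.0, 0.0]}
    priors = [[0.3, 0.7], [0.7, 0.3]]
    print(maxmin_choice(acts, priors))   # "bet_known": ambiguity aversion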


The slides of the following invited talks are also available:

Terrence L. Fine, Theories of Probability: Some Questions about
Foundations (http://www.sipta.org/~isipta03/terry.pdf).
ABSTRACT:
We consider some of the following questions and offer some thoughts
but no answers.  How do we recognize probabilistic reasoning and its
armature of probability theory?  How is the study of probabilistic
reasoning distinguished from the study of other forms of indeterminacy,
imprecision, and vagueness?  Methodology or theory?  What counts as a
theory of probability and what does not?  Is there a unified concept
of probability?  Is probability fundamental or is it merely a
convenient placeholder for a more detailed account?  Can we judge
"adequacy" (satisfaction, success) outside of the very
methodology/theory of probability we are using?  Is a pragmatic stance
sufficient or merely defeatist?  Is self-consistency sufficient or at
most necessary?  What are examples of domains, however small, and
probability theories for them that are unproblematic?  What are
examples of conceptual frameworks or spaces within which to have this
discussion?


Irving J. Good, The Accumulation of Imprecise Weights of Evidence
(http://www.sipta.org/~isipta03/jack.pdf).
ABSTRACT:
A familiar method for modeling imprecise or partially ordered
probabilities is to regard them as interval-valued. It is proposed
here that it is better to assume a Gaussian form for the logarithm of
the probabilities. To fix the hyperparameters of the Gaussian curve,
one could, for example, make judgements about the quartiles. The same
comment applies to weights of evidence. The reason for this proposal
is that when the pieces of evidence are statistically independent one
has additivity and the addition of Gaussian curves is easy to
perform. When the pieces of evidence are dependent, there is a more
general additivity, or one might be able to allow for interactions of
various orders. Possible applications would be to legal trials and to
differential diagnosis in medicine, or even to distinguishing between
two hypotheses in general.
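
A minimal sketch of the proposal in Python (the quartile-fitting step
and the numbers are illustrative assumptions, not taken from the talk):
fit a Gaussian to judged quartiles of each weight of evidence, then use
the fact that for independent evidence both the means and the variances
add.

    # Fit a Gaussian to judged lower and upper quartiles; for a normal
    # distribution the quartiles sit about 0.6745 standard deviations
    # from the mean.
    def gaussian_from_quartiles(q1, q3):
        mean = (q1 + q3) / 2.0
        sigma = (q3 - q1) / (2.0 * 0.6745)
        return mean, sigma ** 2

    # Independent weights of evidence add: sum the means and variances.
    def combine_weights(weights):
        return (sum(m for m, _ in weights), sum(v for _, v in weights))

    # Two independently judged pieces of evidence (in log-odds units):
    w1 = gaussian_from_quartiles(0.5, 1.5)
    w2 = gaussian_from_quartiles(0.2, 0.8)
    total = combine_weights([w1, w2])   # mean 1.5, variance ~0.75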


Patrick Suppes, Application of Nonmonotonic Upper Probabilities to
Quantum Entanglement (http://www.sipta.org/~isipta03/patrick.pdf).
ABSTRACT:
A well-known property of quantum entanglement phenomena is that random
variables representing the observables in a given experiment do not
have a joint probability distribution. The main point of this lecture
is to show how a generalized distribution, which is a nonmonotonic
upper probability distribution, can be used for all the observables in
two important entanglement cases: the four random variables or
observables used in Bell-type experiments and the six correlated spin
observables in three-particle GHZ-type experiments. Whether or not
such upper probabilities can play a significant role in the conceptual
foundations of quantum entanglement will be discussed.


Best wishes,
Marco Zaffalon

-----------------------------------------
Dr. Marco Zaffalon
Senior Researcher


IDSIA
Galleria 2
CH-6928 Manno (Lugano)
Switzerland


phone       +41 91 610 8665
fax         +41 91 610 8661
email       mailto:[EMAIL PROTECTED]
web         http://www.idsia.ch/~zaffalon
-----------------------------------------
