We are pleased to announce the release of PRISM 1.8, which is available for download at:
http://sato-www.cs.titech.ac.jp/prism/.

PRISM is a logic-based probabilistic language that is easy to learn and use for anyone familiar with Prolog. Its most notable feature is that it allows the user to define random switches and use them to make probabilistic choices. The probability distributions of switches can be learned automatically from samples. PRISM is well suited to building complex systems that combine symbolic and probabilistic elements, such as discrete hidden Markov models, stochastic string/graph grammars, game analysis, data mining, performance tuning, and bio-sequence analysis.

PRISM 1.8 improves upon the previous version 1.7 in the following respects:

* Significantly faster learning. For instance, learning a real context-free grammar with 860 productions from a corpus of 10,000 sentences, which was impossible with version 1.7, can now be completed in only 10 minutes on a PC.

* Support for failure and negation. While earlier versions accepted only failure-free and negation-free programs, the new version can handle both; negation is compiled away automatically when programs are loaded. As a result, we can describe models with failure constraints, such as number agreement between subjects and verbs. In addition, learning can be conducted with negative samples.

PRISM, in general, enjoys the following features:

(1) The user can write programs that define distributions over terms and atoms. Mathematically, a PRISM program defines a probability measure over the set of possible Herbrand interpretations, and the distributions are derived and computed from that measure. There are no restrictions on programs; for example, programs are not required to be range-restricted.

(2) The parameters in a program can be learned automatically from examples. A PRISM program contains statistical parameters that reflect the statistical properties of the model.
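As a small illustration of switches and parameter learning, here is a sketch of a coin-flip model using standard PRISM built-ins (values/2, msw/2, learn/1, prob/2); the particular program and probabilities are illustrative, not taken from this announcement:

```prolog
% Declare a random switch named coin with two possible outcomes.
values(coin, [heads, tails]).

% direction/1 makes a probabilistic choice by sampling the switch.
direction(D) :- msw(coin, D).

% Parameters can then be estimated from a list of observed goals:
%   ?- learn([direction(heads), direction(heads), direction(tails)]).
% and the resulting distribution queried:
%   ?- prob(direction(heads), P).
```

The call to learn/1 runs the built-in EM routine mentioned below, adjusting the switch's distribution to maximize the likelihood of the observed goals.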
They can be estimated automatically from examples by ML (maximum likelihood) estimation, performed by a built-in EM learning routine.

(3) Probabilities are computed efficiently in a dynamic-programming manner. PRISM uses "explanation graphs" to compute probabilities and to learn parameters; solutions are shared among explanations, as in dynamic programming. PRISM is implemented on top of B-Prolog, and B-Prolog's tabled search is used to construct explanation graphs.

(4) PRISM is a high-level yet efficient modeling language. Popular symbolic-statistical models such as hidden Markov models, probabilistic context-free grammars, and Bayesian networks can be described in PRISM very compactly, and parameter learning in PRISM can be done as efficiently as by specialized EM algorithms such as the Baum-Welch algorithm. In addition, PRISM can be used to model certain phenomena that are hard to model with specialized statistical tools.

PRISM scales to relatively large data sets and should be of interest to anyone who would like to attempt statistical modeling of complex phenomena.

With best regards,
Taisuke Sato and Neng-Fa Zhou