Ray Solomonoff


Ray Solomonoff (July 25, 1926 – December 7, 2009) was a founding father of algorithmic information theory. He introduced the concept of algorithmic probability as a means of overcoming serious problems associated with applying Bayes' rule in statistics.[1] He was also an originator of artificial intelligence, with his focus on machine learning, prediction and probability.[2]

He was born in Cleveland, Ohio, on July 25, 1926, the son of Julius and Sarah Solomonoff, immigrants from Russia. In 1944 he joined the United States Navy as an instructor in electronics. From 1947 to 1951 he attended the University of Chicago, graduating with an M.S. in physics in 1951.

Already at the age of 16, in 1942, he began to search for a general method of solving mathematical problems, and in the early 1950s he became interested in cybernetics (a common name then for what was later called artificial intelligence).

He was one of the original invitees to the Dartmouth Summer Research Project on Artificial Intelligence in 1956, which is often considered the event that launched artificial intelligence (AI) as a research discipline.[3] There he circulated a report on non-semantic machine learning using probability, publishing a version of it in 1957.[4]

In 1960 he reported his invention of algorithmic probability, giving the formula used in his theory of induction and prediction,[5] and then published further, with various examples, in Information and Control in 1964.[6]
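In modern notation (a standard later formulation rather than Solomonoff's original 1960 one), the algorithmic probability of a finite string x is a program-length-weighted sum over the programs p that cause a universal prefix machine U to output a string beginning with x, and prediction follows by conditioning:

```latex
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)},
\qquad
M(a \mid x) = \frac{M(xa)}{M(x)}
```

Shorter programs contribute exponentially more weight, so simpler explanations of the observed data dominate the prediction.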

John Markoff, in the New York Times obituary for Solomonoff, quotes Eric Horvitz, former president of the Association for the Advancement of Artificial Intelligence: “Ray did early work on the theoretical foundations of learning systems, focused on understanding how to generate and assign probabilities to sequences of symbols, which could be mapped to the challenge of predicting what comes next, given what you’ve seen so far.” The basic idea of Kolmogorov complexity, which deals with the shortest effective description length of objects, was part of his general theory.
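The idea Horvitz describes, simplicity-weighted hypotheses updated by how well they predict the sequence seen so far, can be sketched as a toy Bayesian mixture. Everything below is invented for illustration: the tiny hand-picked hypothesis class, its "description lengths," and its probabilities. True Solomonoff induction mixes over all computable hypotheses via a universal machine and is incomputable.

```python
# Toy sketch of simplicity-weighted sequence prediction (illustrative only).
# Each hypothesis is (assumed description length, predictor), where the
# predictor maps an observed bit prefix to P(next bit = 1).

def predict_next(bits, hypotheses):
    """Return P(next bit = 1) under a simplicity-weighted Bayesian mixture."""
    num = 0.0
    den = 0.0
    for prog_len, model in hypotheses:
        prior = 2.0 ** -prog_len              # simplicity prior: 2^-(length)
        like = 1.0
        for i, b in enumerate(bits):          # likelihood of the observed prefix
            p1 = model(bits[:i])
            like *= p1 if b == 1 else 1.0 - p1
        num += prior * like * model(bits)     # mass this hypothesis puts on "1"
        den += prior * like
    return num / den

# Hypothetical hypothesis class, ordered roughly from simple to complex.
hypotheses = [
    (1, lambda prefix: 0.001),   # "almost always 0"
    (1, lambda prefix: 0.999),   # "almost always 1"
    (2, lambda prefix: 0.5),     # "fair coin"
    (3, lambda prefix: 0.999 if len(prefix) % 2 == 0 else 0.001),  # "1,0,1,0,..."
]

# After seeing 1,0,1,0,1 the alternation hypothesis fits best, so the
# mixture predicts the next bit is very likely 0 (P(1) is small).
print(predict_next([1, 0, 1, 0, 1], hypotheses))
```

Hypotheses that predicted the data well retain their posterior weight; the simplicity prior breaks ties in favor of shorter descriptions, which is the core of the approach.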

Kolmogorov, writing in 1965,[7] focused on the randomness of a string and its structure, as did Gregory Chaitin, who in 1968 also independently described complexity,[8] while Solomonoff focused on induction and prediction of how the string might continue. Kolmogorov complexity is sometimes referred to as Solomonoff-Kolmogorov-Chaitin complexity.
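In modern notation, the Kolmogorov complexity of a string x with respect to a universal machine U is the length of its shortest effective description, that is, the shortest program that outputs x:

```latex
K_U(x) = \min \{\, \ell(p) : U(p) = x \,\}
```

A string is random in this sense when no program much shorter than the string itself produces it.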

Solomonoff wrote many papers, and gave many talks about how to develop machine learning using this form of conditional probability with the goal of machine intelligence.

Throughout his career Solomonoff was concerned with the potential benefits and dangers of AI, discussing it in many of his published reports. In 1985 he analyzed a likely evolution of AI, giving a formula predicting when it would reach the "Infinity Point".[9] This work is part of the history of thought about a possible technological singularity.

In 2003 he was the first recipient of the Kolmogorov Award from the Computer Learning Research Centre at Royal Holloway, University of London, where he gave the inaugural Kolmogorov Lecture, "The Universal Distribution and Machine Learning."[10]

The IEEE Information Theory Society Newsletter of March 2011 carries an In Memoriam for him by Peter Gács and Paul Vitányi,[11] which gives a good description of his life’s work, saying: “His original ideas helped start the thriving research areas of algorithmic information theory and algorithmic inductive inference. His scientific legacy is enduring and important.”

Further Reading

Ray Solomonoff’s papers are on his website at http://raysolomonoff.com/publications/pubs.html

A PDF of the original 1956 Dartmouth report, along with other information, is in the Dartmouth AI Archives on Solomonoff's website at http://raysolomonoff.com/dartmouth/dart.html.

PDFs of the 1964 "A Formal Theory of Inductive Inference, Part I and Part II" (Information and Control) are at http://raysolomonoff.com/publications/1964pt1.pdf and http://raysolomonoff.com/publications/1964pt2.pdf

An autobiographical account of his work up to 1997 is "The Discovery of Algorithmic Probability," Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 73–88: http://raysolomonoff.com/publications/barc97.pdf

The Kolmogorov Lecture, "The Universal Distribution and Machine Learning" is at http://raysolomonoff.com/publications/kollect.pdf

An example of one of his lectures is at: https://www.youtube.com/watch?v=CfjkqpzDkCs&t=261s


  1. Vitányi, P., "Ray Solomonoff, Founding Father of Algorithmic Information Theory," Algorithms, Vol. 3, pp. 260–264, 2010.
  2. Markoff, J., "Ray Solomonoff, Pioneer in Artificial Intelligence, Dies at 83" (https://www.nytimes.com/2010/01/10/science/10solomonoff.html), The New York Times. Retrieved January 11, 2010.
  3. Moor, J., "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years," AI Magazine, Vol. 27, No. 4, pp. 87–91, 2006.
  4. Solomonoff, R., "An Inductive Inference Machine," IRE Convention Record, Section on Information Theory, Part 2, pp. 56–62, 1957.
  5. Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference," Report V-131, Zator Co., Cambridge, MA, Nov. 1960.
  6. Solomonoff, R., "A Formal Theory of Inductive Inference, Part I and Part II," Information and Control, Vol. 7, No. 1, pp. 1–22, March 1964, and Vol. 7, No. 2, pp. 224–254, June 1964.
  7. Kolmogorov, A., "Three Approaches to the Quantitative Definition of Information," Problems of Information Transmission, Vol. 1, pp. 3–11, 1965.
  8. Chaitin, G., "On the Length of Programs for Computing Binary Sequences, II," Journal of the ACM, Vol. 16, pp. 145–159, 1969.
  9. Solomonoff, R., "The Time Scale of Artificial Intelligence: Reflections on Social Effects," Human Systems Management, Vol. 5, pp. 149–153, 1985.
  10. Solomonoff, R., "The Universal Distribution and Machine Learning," The Kolmogorov Lecture, Feb. 27, 2003, Royal Holloway, University of London. The Computer Journal, Vol. 46, No. 6, 2003.
  11. Gács, P. and Vitányi, P., "Raymond J. Solomonoff 1926–2009," IEEE Information Theory Society Newsletter, March 2011, pp. 11–16.