Computer Science : Approximate Optimization
* [http://en.wikibooks.org/wiki/Artificial_Intelligence wikibooks, "Artificial_Intelligence"]
* [http://www.hutter1.net/sitemap.htm Marcus Hutter home page]
* [https://mloss.org/software/ mloss.org]

Latest revision as of 20:14, 10 February 2019

General Idea


If the world can be defined as a continuous, differentiable function and the task is to find

$point_{opt} = \arg\max_{point} TheContinuousFunction(point)$

then classical calculus has solved the optimization problem with absolute exactness: set the derivative to zero and solve. However, not all problems are that simple. Often they are discrete, based on time series, have probabilistic values, contain conditionals, depend on states, have probabilistic, time-dependent constraints, etc. That's where approximation-based optimization (Artificial Intelligence) becomes better than nothing.
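The "classical calculus" case can be sketched in a few lines. This is a minimal, hand-picked example: the function $f$ and its maximum are invented here for illustration.

```python
# Hypothetical smooth objective: f(x) = -(x - 3)^2 + 5.
# Its derivative f'(x) = -2*(x - 3) is zero exactly at x = 3,
# so calculus gives the optimum in closed form, with no search at all.

def f(x):
    return -(x - 3) ** 2 + 5

def f_prime(x):
    return -2 * (x - 3)

# Setting f'(x) = 0 and solving yields the stationary point directly.
x_opt = 3.0
assert f_prime(x_opt) == 0.0
print(f(x_opt))  # the exact maximum: 5.0
```

For the discrete, stateful, probabilistic problems mentioned above, no such closed-form step exists, which is what motivates the approximate methods on this page.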

As to the machine learning aspect of Artificial Intelligence: machine learning is a software development technique for constructing the function that searches for the approximately optimal strategies.


There is no chance that I am able to describe them all adequately here, not to mention that I do not know them all. The following comments are meant to supplement the materials on the Wild Wild Web.

Game Graph

A game graph is a graph of game states. It may contain cycles.

A quotation from presentation slides of Juhan-Peep Ernits: "A mutex between two actions indicates that it is impossible to perform these actions in parallel. A mutex between two literals indicates that it is impossible to have these both literals true at this stage."

Some key phrases: topological sorting, breadth-first search, depth-first search.
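As a reminder of the first of those key phrases, here is a minimal sketch of topological sorting (Kahn's algorithm); the graph and its node names are invented for illustration.

```python
# Topological sorting via Kahn's algorithm: repeatedly emit a node with
# no remaining incoming edges. Only works on acyclic graphs, which is why
# a game graph with cycles cannot, in general, be topologically sorted.
from collections import deque

def topological_sort(graph):
    # graph: dict mapping each node to a list of its successor nodes.
    indegree = {node: 0 for node in graph}
    for successors in graph.values():
        for node in successors:
            indegree[node] += 1
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph[node]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(graph):
        raise ValueError("graph contains a cycle; no topological order exists")
    return order

# A hypothetical 4-state dependency graph.
print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# → ['a', 'b', 'c', 'd']
```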

Forward state-space search is a kind of search where the search starts from the initial state and part of the goal is to find a path from the start state to the desired end state.

Backward relevant-states search is a kind of search where the search starts from the desired end state and part of the goal is to find out how it is possible to arrive at that end state from the possible start states.
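Forward state-space search can be sketched as a breadth-first search over a game graph; the tiny graph and state names below are invented for illustration. The `visited` set makes the cycles mentioned above harmless, and backward relevant-states search would run the same loop over the reversed edges.

```python
# Forward state-space search as breadth-first search: explore states
# outward from the start state until the goal state is reached.
from collections import deque

def forward_search(graph, start, goal):
    # Returns the shortest path of states from start to goal, or None.
    frontier = deque([[start]])
    visited = {start}  # breaks cycles in the game graph
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical game graph with the cycle s0 -> s1 -> s0.
game_graph = {"s0": ["s1", "s2"], "s1": ["s0", "s3"], "s2": ["s3"], "s3": []}
print(forward_search(game_graph, "s0", "s3"))  # → ['s0', 's1', 's3']
```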

Game Tree


A player chooses branches in the game tree so that its winnings in the worst-case scenario are maximized and its opponent's winnings in the opponent's best-case scenario are minimized.

Alpha–beta Pruning

The same as minimax, except that game tree branches that provably cannot influence the final choice are omitted, not analyzed.


Planning algorithms are practically the same as games with 2 players, except that one player has left the game and the player who is left must achieve the "winning scenario" as efficiently, economically, and cheaply as possible.



Research Projects

Training Data Sets

Papers and Journals



Academic Taboos