Commit 6dbe527c authored by Steven Cordwell's avatar Steven Cordwell

update to HISTORY and setup.py ready for v0.10 tag

parent 0660caca
2013-01-25
v0.10 - the RelativeValueIteration class has been completed, which fulfils the requirement for bumping the version number.
>>> import mdp
>>> P, R = mdp.exampleForest()
>>> rvi = mdp.RelativeValueIteration(P, R) # this algorithm does not use discounting
>>> rvi.iterate() # runs the algorithm
>>> rvi.policy # to get the optimal policy
(0, 0, 0)
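The average-reward update behind RelativeValueIteration can be sketched roughly as below. This is a hedged illustration only, not the toolbox's actual code: the helper name `relative_value_iteration`, the array shapes (P as (A, S, S), R as (S, A)), and the tiny two-state MDP are all invented for the example.

```python
import numpy as np

def relative_value_iteration(P, R, epsilon=0.01, max_iter=1000):
    """Undiscounted (average-reward) relative value iteration sketch.

    P : (A, S, S) array of transition matrices, one per action.
    R : (S, A) array of rewards.
    Returns (policy, gain), where gain approximates the optimal
    average reward per step.
    """
    A, S, _ = P.shape
    h = np.zeros(S)                      # relative value function
    for _ in range(max_iter):
        Q = R.T + P @ h                  # (A, S) one-step lookahead values
        h_new = Q.max(axis=0)
        diff = h_new - h
        # the span of the change bounds the distance to optimality
        if diff.max() - diff.min() < epsilon:
            break
        h = h_new - h_new[0]             # normalise against a reference state
    return tuple(Q.argmax(axis=0)), diff.max()

# Tiny two-state MDP: action 0 stays put, action 1 jumps to state 1,
# and only state 1 yields reward, so the optimal move from state 0 is action 1.
P = np.array([[[1., 0.], [0., 1.]],      # action 0: identity
              [[0., 1.], [0., 1.]]])     # action 1: go to state 1
R = np.array([[0., 0.],                  # state 0: no reward
              [1., 1.]])                 # state 1: reward 1
policy, gain = relative_value_iteration(P, R)
print(policy, gain)
```

Subtracting the value of a reference state each sweep keeps `h` bounded, since undiscounted values would otherwise grow without limit.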
Pre 2013-01-25
v0.9 - the value iteration Gauss-Seidel algorithm is now in working order. The class ValueIterationGS should be stable and usable. Use like this:
>>> import mdp
>>> P, R = mdp.exampleRand(10, 3) # to create random transition and reward matrices with 10 states and 3 actions
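The Gauss-Seidel variant of value iteration can be sketched as follows. This is a hedged, minimal sketch and not the toolbox's ValueIterationGS implementation; the helper name, stopping rule, and array shapes (P as (A, S, S), R as (S, A)) are assumptions made for the example.

```python
import numpy as np

def value_iteration_gs(P, R, discount=0.9, epsilon=1e-6, max_iter=10000):
    """Gauss-Seidel value iteration sketch: V is updated in place, so
    later states in a sweep already see the refreshed values of earlier
    ones, typically converging in fewer sweeps than the Jacobi update."""
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        delta = 0.0
        for s in range(S):
            v_old = V[s]
            # Bellman backup using the freshest entries of V
            V[s] = max(R[s, a] + discount * (P[a, s] @ V) for a in range(A))
            delta = max(delta, abs(V[s] - v_old))
        if delta < epsilon * (1.0 - discount) / discount:
            break
    policy = tuple(int(np.argmax([R[s, a] + discount * (P[a, s] @ V)
                                  for a in range(A)])) for s in range(S))
    return policy, V

# Tiny two-state MDP: action 0 stays put, action 1 jumps to state 1,
# and only state 1 yields reward.
P = np.array([[[1., 0.], [0., 1.]],     # action 0: stay
              [[0., 1.], [0., 1.]]])    # action 1: go to state 1
R = np.array([[0., 0.],
              [1., 1.]])                # only state 1 is rewarding
policy, V = value_iteration_gs(P, R)
print(policy, V)
```

Because each backup reads the partially updated `V`, a single sweep propagates information further than the synchronous update does.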
setup.py
@@ -3,11 +3,11 @@
 from distutils.core import setup
 setup(name="PyMDPtoolbox",
-      version="0.9",
+      version="0.10",
       description="Python Markov Decision Problem Toolbox",
       author="Steven Cordwell",
       author_email="steven.cordwell@uqconnect.edu.au",
       url="http://code.google.com/p/pymdptoolbox/",
-      license="Modified BSD License",
+      license="New BSD License",
       py_modules=["mdp"],
       requires=["math", "numpy", "random", "scipy", "time"],)
\ No newline at end of file