Commit ef8b4164 authored by Steven Cordwell

wrote README

parent 274b9297
Markov Decision Process (MDP) Toolbox 4.0 for Python
====================================================
The MDP toolbox provides classes and functions for the resolution of
discrete-time Markov Decision Processes. The list of algorithms that have been
implemented includes backwards induction, linear programming, policy iteration,
Q-learning and value iteration along with several variations.
Documentation
-------------
Documentation is available as docstrings in the module code and as HTML in the
doc folder or from `the MDPtoolbox homepage <http://www.>`_.
Installation
------------
1. Download the stable release from http:// or get a local copy of the
source with Git
``git clone https://code.google.com/p/pymdptoolbox/``
2. If you downloaded the `*.zip` or `*.tar.gz` archive, then extract it
``tar -xzvf pymdptoolbox.tar.gz``
``unzip pymdptoolbox``
3. Change to the MDP toolbox directory
``cd pymdptoolbox``
4. Install via Distutils either to the filesystem or to a home directory
``python setup.py install``
``python setup.py install --home=<dir>``
Quick Use
---------
Start Python in your favourite way. Then follow the example below to import the
module, set up an example Markov decision problem using a discount value of 0.9,
and solve it using the value iteration algorithm.
>>> import mdp
>>> P, R = mdp.exampleForest()
>>> vi = mdp.ValueIteration(P, R, 0.9)
>>> vi.iterate()
>>> vi.policy
(0, 0, 0)
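To make the value iteration step above concrete, here is a self-contained NumPy sketch of the algorithm that the toolbox implements. The small transition and reward matrices below are illustrative assumptions, not the toolbox's forest example, and the ``value_iteration`` helper is a simplification of what the ``ValueIteration`` class does internally.

```python
import numpy as np

# Illustrative (assumed) problem data, NOT the toolbox's forest example:
# P[a] is the transition matrix under action a, shape (A, S, S);
# R[s, a] is the reward for taking action a in state s, shape (S, A).
P = np.array([[[0.5, 0.5],
               [0.8, 0.2]],
              [[0.0, 1.0],
               [0.1, 0.9]]])
R = np.array([[5.0, 10.0],
              [-1.0, 2.0]])

def value_iteration(P, R, discount=0.9, epsilon=1e-8):
    """Repeat the Bellman optimality backup until the value function converges."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + discount * sum over s' of P[a, s, s'] * V[s']
        Q = R + discount * (P @ V).T
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < epsilon:
            return V_new, tuple(Q.argmax(axis=1))
        V = V_new

V, policy = value_iteration(P, R)
```

The ``vi.policy`` tuple printed in the example above is the analogous greedy policy: for each state, the action that maximises the converged Q-values.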
setup.py
--------
::

    from distutils.core import setup

    setup(name="PyMDPtoolbox",
          version="4.0alpha1",
          description="Python Markov Decision Problem Toolbox",
          author="Steven Cordwell",
          author_email="steven.cordwell@uqconnect.edu.au",
          url="http://code.google.com/p/pymdptoolbox/",
          license="New BSD License",
          py_modules=["mdp"],
          requires=["math", "numpy", "random", "scipy", "time"],)