Here, we introduce how to implement a simple mountain-car task with SkyAI. The mountain-car task has a continuous state and a continuous action, and will be implemented as a SkyAI module. In this tutorial, we discretize the action space; thus, this tutorial is an example of a continuous-state/discrete-action problem. As the reinforcement learning algorithm, Peng's Q(lambda)-learning is applied to the mountain-car task; of course, we use predefined modules. In order to approximate the action value function over the continuous state space, we employ a normalized Gaussian network (NGnet).
The following is the procedure:
The notable differences from the maze task are generating the NGnet and using it in the Q(lambda)-learning.
The sample code works on a console; no extra libraries are required.
In the mountain-car environment, there are a mountain, a car, and a goal.
The objective of this task is to drive from the start (x=-0.5) to the goal (x>=0.6). The car can accelerate, but does not have enough power to climb the mountain directly. Thus, the car first needs to climb the opposite slope, then swing back and climb toward the goal using the momentum it has gained (the kickback).
The dynamics of the mountain car is given as follows:

\dot{x}_{t+1} = \dot{x}_t + \left( -g\,m\cos(3x_t) + \frac{a}{m} - k\,\dot{x}_t \right)\Delta t, \qquad x_{t+1} = x_t + \dot{x}_{t+1}\,\Delta t,

where m denotes the mass of the car (0.2), k denotes the friction factor (0.3), g denotes the gravitational acceleration (9.8), \Delta t denotes the time step (0.01), and a denotes the acceleration of the car.

The state is the 2-dimensional vector (x, \dot{x}), i.e. the position and the velocity of the car. The action is an acceleration a chosen from the discrete set {-0.2, 0, 0.2}, which does not provide enough power to drive straight over the mountain. The reward is given by:

r = 0.1 \left( \frac{1}{1 + (0.6 - x)^2} - 1 \right).
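To get a feel for the dynamics, here is a minimal, self-contained sketch (plain C++, not a SkyAI module; the constants are those listed above) that integrates the equations with the maximum acceleration applied constantly and prints the farthest position the car reaches. The car stalls well short of the goal at x=0.6, which is why the kickback strategy is needed.

// Stand-alone check of the dynamics (not a SkyAI module).
// The maximum acceleration a=+0.2 is applied all the time; the car
// still stalls well below the goal position x=0.6.
#include <cmath>
#include <iostream>
int main()
{
  const double m(0.2), k(0.3), g(9.8), dt(0.01), a(0.2);
  double x(-0.5), v(0.0), x_max(x);
  for(int step(0); step<100000; ++step)  // 1000 seconds of simulated time
  {
    v+= (-g*m*std::cos(3.0*x) + a/m - k*v) * dt;
    x+= v*dt;
    if(x<=-1.2)  {x=-1.2; v=0.0;}       // same boundary treatment as the task module
    if(x>x_max)  x_max= x;
  }
  std::cout<<"farthest position reached: "<<x_max<<std::endl;  // stays below 0.6
  return 0;
}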
Please refer to ../Tutorial - Making Module.
int    NumEpisodes;   // number of episodes
double TimeStep;      // time-step
double MaxTime;       // max time per episode (task is terminated after this)
double Gravity;       // gravity of the environment
double Mass;          // mass of the car
double Fric;          // friction factor
int    DispWidth;     // width for displaying the environment on the console
int    DispHeight;    // height for displaying the environment on the console
int    SleepUTime;    // duration for display
TMountainCarTaskConfigurations (var_space::TVariableMap &mmap)
  :
    NumEpisodes (200),
    TimeStep    (0.01),
    MaxTime     (100.0),
    Gravity     (9.8),
    Mass        (0.2),
    Fric        (0.3),
    DispWidth   (40),
    DispHeight  (15),
    SleepUTime  (1000)
  {
    Register(mmap);
  }
ADD( NumEpisodes );
ADD( TimeStep    );
ADD( MaxTime     );
ADD( Gravity     );
ADD( Mass        );
ADD( Fric        );
ADD( DispWidth   );
ADD( DispHeight  );
ADD( SleepUTime  );
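Each parameter registered with ADD() becomes accessible from agent scripts by its name. For example (the values here are only examples; mountaincar_task is the instance name used later in this tutorial), the number of episodes and the display wait can be changed without recompiling:

mountaincar_task.config={
    NumEpisodes = 500
    SleepUTime = 2000
  }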
//===========================================================================================
//!\brief Mountain Car task (environment+task) module
class MMountainCarTaskModule
    : public TModuleInterface
//===========================================================================================
{
public:
  typedef TModuleInterface        TParent;
  typedef MMountainCarTaskModule  TThis;
  SKYAI_MODULE_NAMES(MMountainCarTaskModule)

  MMountainCarTaskModule (const std::string &v_instance_name)
    : TParent (v_instance_name),
      conf_   (TParent::param_box_config_map())
    {
    }

protected:

  TMountainCarTaskConfigurations  conf_;

};  // end of MMountainCarTaskModule
//-------------------------------------------------------------------------------------------
TRealVector  accel_;   //!< 1-dim acceleration
TRealVector  state_;   //!< position, velocity
TReal        time_;
TInt         num_episode_;
virtual void slot_start_exec (void);

Then, define it outside the class:
/*virtual*/void MMountainCarTaskModule::slot_start_exec (void)
{
  init_environment();
  signal_initialization.ExecAll();

  for(num_episode_=0; num_episode_<conf_.NumEpisodes; ++num_episode_)
  {
    init_environment();
    signal_start_of_episode.ExecAll();

    bool running(true);
    while(running)
    {
      signal_start_of_timestep.ExecAll(conf_.TimeStep);

      running= step_environment();
      show_environment();
      usleep(conf_.SleepUTime);

      if(time_>=conf_.MaxTime)
      {
        signal_finish_episode.ExecAll();
        running= false;
      }

      signal_end_of_timestep.ExecAll(conf_.TimeStep);
    }
    signal_end_of_episode.ExecAll();
  }
}

Here we used three member functions: init_environment, step_environment, and show_environment. They are declared in the protected section:
void init_environment (void);
bool step_environment (void);
void show_environment (void);

and defined outside the class:
void MMountainCarTaskModule::init_environment (void)
{
  state_.resize(2);
  state_(0)= -0.5;
  state_(1)= 0.0;
  accel_.resize(1);
  accel_(0)= 0.0;
  time_= 0.0l;
}
bool MMountainCarTaskModule::step_environment (void)
{
  state_(1)= state_(1) + (-conf_.Gravity*conf_.Mass*std::cos(3.0*state_(0))
                          + accel_(0)/conf_.Mass - conf_.Fric*state_(1)) * conf_.TimeStep;
  state_(0)= state_(0) + state_(1)*conf_.TimeStep;
  time_+= conf_.TimeStep;

  TReal reward= 0.1l*(1.0l / (1.0l + Square(0.6l-state_(0))) - 1.0l);
  signal_reward.ExecAll(reward);

  if(state_(0)<=-1.2)  {state_(0)=-1.2; state_(1)=0.0;}
  if(state_(0)>=0.6)
  {
    signal_finish_episode.ExecAll();
    return false;
  }
  return true;
}
void MMountainCarTaskModule::show_environment (void)
{
  std::cout<<"("<<state_(0)<<","<<state_(1)<<"), "<<accel_(0)<<", "<<time_<<"/"<<num_episode_<<std::endl;

  std::vector<int> curve(conf_.DispWidth);
  for(int x(0); x<conf_.DispWidth; ++x)
  {
    double rx= (0.6+1.2)*x/static_cast<TReal>(conf_.DispWidth)-1.2;
    curve[x]= static_cast<TReal>(conf_.DispHeight-1)*0.5*(1.0-sin(3.0*rx))+1;
    std::cout<<"-";
  }
  std::cout<<std::endl;

  int pos= static_cast<TReal>(conf_.DispWidth)*(state_(0)+1.2)/(0.6+1.2);
  for(int y(0); y<conf_.DispHeight; ++y)
  {
    for(int x(0); x<conf_.DispWidth; ++x)
    {
      if(x==pos && y==curve[x]-1)                     std::cout<<"#";
      else if(x==conf_.DispWidth-1 && y==curve[x]-1)  std::cout<<"G";
      else if(y>=curve[x] || x==0)                    std::cout<<"^";
      else                                            std::cout<<" ";
    }
    std::cout<<std::endl;
  }
  for(int x(0); x<conf_.DispWidth; ++x)  std::cout<<"-";
  std::cout<<std::endl<<std::endl;
}
virtual void slot_execute_action_exec (const TRealVector &a)
  {accel_= a;}

virtual const TRealVector& out_state_get (void) const
  {return state_;}

virtual const TReal& out_cont_time_get (void) const
  {return time_;}
void Start() { slot_start.Exec(); }
SKYAI_ADD_MODULE(MMountainCarTaskModule)

This should be written outside the class and inside the namespace loco_rabbits.
That's it.
Next, in order to test the MMountainCarTaskModule module, we make a module named MRandomActionModule that emits a random action at each step. MRandomActionModule has two ports: a slot port slot_timestep, which is connected to the task's time-step signal, and a signal port signal_action, which emits the selected action.
Thus, its implementation is very simple:
//===========================================================================================
//!\brief Random action module
class MRandomActionModule
    : public TModuleInterface
//===========================================================================================
{
public:
  typedef TModuleInterface     TParent;
  typedef MRandomActionModule  TThis;
  SKYAI_MODULE_NAMES(MRandomActionModule)

  MRandomActionModule (const std::string &v_instance_name)
    : TParent       (v_instance_name),
      slot_timestep (*this),
      signal_action (*this)
    {
      add_slot_port   (slot_timestep);
      add_signal_port (signal_action);
    }

protected:

  MAKE_SLOT_PORT(slot_timestep, void, (const TReal &dt), (dt), TThis);
  MAKE_SIGNAL_PORT(signal_action, void (const TRealVector &), TThis);

  virtual void slot_timestep_exec (const TReal &dt)
    {
      static int time(0);
      static TRealVector a(1);
      if(time%50==0)
        switch(rand() % 3)
        {
          case 0: a(0)= 0.0;   break;
          case 1: a(0)= +0.2;  break;
          case 2: a(0)= -0.2;  break;
        }
      signal_action.ExecAll(a);
      ++time;
    }

};  // end of MRandomActionModule
//-------------------------------------------------------------------------------------------
Then, use SKYAI_ADD_MODULE macro to register the module on SkyAI:
SKYAI_ADD_MODULE(MRandomActionModule)
Refer to ../Tutorial - Making Executable.
The main function for the mountain-car task is almost the same as that of the maze task. A difference is the name of the module type.
Here is an example:
int main(int argc, char**argv)
{
  TOptionParser option(argc,argv);

  TAgent agent;
  if (!ParseCmdLineOption (agent, option))  return 0;

  MMountainCarTaskModule *p_mountaincar_task
      = dynamic_cast<MMountainCarTaskModule*>(agent.SearchModule("mountaincar_task"));
  if(p_mountaincar_task==NULL)
    {LERROR("module `mountaincar_task' is not defined as an instance of MMountainCarTaskModule"); return 1;}

  agent.SaveToFile (agent.GetDataFileName("before.agent"),"before-");

  p_mountaincar_task->Start();

  agent.SaveToFile (agent.GetDataFileName("after.agent"),"after-");

  return 0;
}
First, write a makefile, which is almost the same as that of the maze task; the only difference is the executable's name. Then, execute the make command. An executable named mountain_car.out is generated.
Please refer to ../Tutorial - Writing Agent Script.
Now, let's test MMountainCarTaskModule using MRandomActionModule.
module MMountainCarTaskModule mountaincar_task
module MRandomActionModule rand_action
connect mountaincar_task.signal_start_of_timestep , rand_action.slot_timestep
connect rand_action.signal_action , mountaincar_task.slot_execute_action
mountaincar_task.config={ SleepUTime= 1000 }
That's it. Let's test!
Launch the executable as follows:
./mountain_car.out -agent random_act
You will see a mountain like the following, where the car (#) moves randomly.
(-0.242451,0.875342), 0, 35.61/1
(Console snapshot: an ASCII rendering of the mountain in which the car '#' moves randomly on the slope drawn with '^' characters; the goal is marked 'G' at the top right.)
We use an NGnet to approximate the action value function. In order to use the NGnet, we need to follow the process:
The basis functions are allocated over the state space; they should cover the possible states.
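As background (this is the standard NGnet formulation, not code taken from SkyAI's implementation), a normalized Gaussian network converts the state x into a feature vector by normalizing Gaussian basis functions, and the action value is then typically approximated linearly over that feature:

G_i(x) = \exp\!\Big(-\tfrac{1}{2}(x-\mu_i)^{\top}\Sigma_i^{-1}(x-\mu_i)\Big), \qquad \phi_i(x) = \frac{G_i(x)}{\sum_j G_j(x)}, \qquad Q(x,a) \approx \theta_a^{\top}\phi(x),

where \mu_i and \Sigma_i are the center and covariance of the i-th basis function, and \theta_a is the weight vector for action a that the Q(lambda)-learning module learns.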
In this section, we describe how to generate the basis functions using the generating tool.
The generating tools are stored in the tools/ngnet-generator directory; the executables may already be compiled. Otherwise, execute the make command in tools/ngnet-generator.
The basis functions of NGnet are generated as follows:
./gen-grid.out -out OUT_FILENAME -unit_grid DIV_VEC -xmin MIN_VEC -xmax MAX_VEC -invSigma INVSIGMA_VEC
Its options are (N: the dimensionality of the state): -out gives the output file name, -unit_grid the number of grid divisions in each dimension (an N-dimensional vector), -xmin and -xmax the lower and upper bounds of the state space (N-dimensional vectors), and -invSigma roughly the inverse covariance of each basis function (N-dimensional, or "auto" to let the tool decide it from the grid).
Of course, N is 2 in the mountain-car task. You can investigate the upper and lower bounds of the state in the random-action test. In this task, let us use 5x5 basis functions. Thus, we generate the basis functions of the NGnet as follows:
../../tools/ngnet-generator/gen-grid.out -out ngnet_mc5x5.dat -unit_grid "5 5" -xmin "-1.2 -1.5" -xmax "0.6 1.5" -invSigma "auto"
where ../../ denotes the relative path to the SkyAI base directory.
The file ngnet_mc5x5.dat is generated; it is a text file, so you can inspect its contents.
This figure illustrates the locations of the basis functions. Each ellipse is centered at a Gaussian basis function's center and shows its 1-standard-deviation contour.
Please refer to ../Tutorial - Writing Agent Script.
Let's apply a Q-learning module to MMountainCarTaskModule.
include_once "ql_da"
module MMountainCarTaskModule mountaincar_task
module MTDDiscAct behavior
module MLCHolder_TRealVector direct_action
module MDiscretizer action_discretizer
module MBasisFunctionsNGnet ngnet
/// initialization process:
connect mountaincar_task.signal_initialization , ngnet.slot_initialize
connect ngnet.slot_initialize_finished , action_discretizer.slot_initialize
connect action_discretizer.slot_initialize_finished , behavior.slot_initialize
/// start of episode process:
connect mountaincar_task.signal_start_of_episode , behavior.slot_start_episode
/// start of time step process:
connect mountaincar_task.signal_start_of_timestep , direct_action.slot_start_time_step
/// end of time step process:
connect mountaincar_task.signal_end_of_timestep , direct_action.slot_finish_time_step
/// learning signals:
connect behavior.signal_execute_action , action_discretizer.slot_in
connect action_discretizer.signal_out , direct_action.slot_execute_action
connect direct_action.signal_execute_command , mountaincar_task.slot_execute_action
connect direct_action.signal_end_of_action , behavior.slot_finish_action
connect mountaincar_task.signal_reward , behavior.slot_add_to_reward
connect mountaincar_task.signal_finish_episode , behavior.slot_finish_episode_immediately
/// I/O:
connect action_discretizer.out_set_size , behavior.in_action_set_size
connect mountaincar_task.out_state , ngnet.in_x
connect ngnet.out_y , behavior.in_feature
connect mountaincar_task.out_cont_time , behavior.in_cont_time
mountaincar_task.config={ SleepUTime= 1000 }
ngnet.config ={ NGnetFileName = "ngnet_mc5x5.dat" }
action_discretizer.config ={
    Min = (-0.2, -0.2)
    Max = ( 0.2,  0.2)
    Division = (3, 3)
  }
direct_action.config ={
    Interval = 0.2;
  }
behavior.config={
    UsingEligibilityTrace = true
    UsingReplacingTrace = true
    Lambda = 0.9
    GradientMax = 1.0e+100
    ActionSelection = "asBoltzman"
    PolicyImprovement = "piExpReduction"
    Tau = 1
    TauDecreasingFactor = 0.05
    TraceMax = 1.0
    Gamma = 0.9
    Alpha = 0.3
    AlphaDecreasingFactor = 0.002
    AlphaMin = 0.05
  }
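The ActionSelection = "asBoltzman" setting makes the agent choose actions stochastically according to a Boltzmann (softmax) policy. As a rough sketch of the standard formulation (the exact SkyAI implementation may differ in detail), with temperature \tau corresponding to Tau:

\pi(a \mid x) = \frac{\exp(Q(x,a)/\tau)}{\sum_{a'} \exp(Q(x,a')/\tau)}.

A larger \tau makes the policy more exploratory, while \tau \to 0 approaches the greedy policy; TauDecreasingFactor presumably controls how quickly the temperature is reduced as learning proceeds.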
Launch the executable as follows:
./mountain_car.out -path ../../benchmarks/cmn -agent ql -outdir result/rl1
where ../../benchmarks/cmn is the relative path to the benchmarks/cmn directory; modify it for your environment.
After several tens of episodes, the policy will converge to a path like the following:
(-0.499914,0.00861355), 0.2, 0.01/200
(-0.450656,0.253621), 0.2, 0.35/0
(-0.317402,0.311478), 0.2, 0.78/0
(-0.62904,-0.879678), -0.2, 1.54/0
(-0.915373,-0.0505839), 0.2, 2.06/0
(-0.638749,1.06824), 0.2, 2.54/0
(-0.162464,1.08153), 0.2, 2.95/0
(0.149024,0.667015), 0.2, 3.31/0
(0.595877,0.685196), 0.2, 4.16/0
(Console snapshots: each header shows (position, velocity), acceleration, and time/episode; the ASCII renderings show the car '#' first backing up the left slope and then accelerating forward to reach the goal 'G' at about t=4.2.)
In order to store the learning logs, make a directory result/rl1, which is specified with the -outdir option. Plotting log-eps-ret.dat, you will obtain a learning curve.
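For example, with gnuplot (assuming, as the file name suggests, that each line of log-eps-ret.dat contains an episode index followed by the return obtained in that episode; check the file to confirm the column layout):

plot "result/rl1/log-eps-ret.dat" using 1:2 with lines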