HDDM 0.8.0 documentation


Author: Thomas V. Wiecki, Imri Sofer, Mads L. Pedersen, Michael J. Frank
Contact: thomas.wiecki@gmail.com, imri_sofer@brown.edu, madslupe@gmail.com, michael_frank@brown.edu
Web site: http://ski.clps.brown.edu/hddm_docs
Mailing list: https://groups.google.com/group/hddm-users/
Copyright: This document has been placed in the public domain.
License: HDDM is released under the BSD 2 license.


HDDM is a Python toolbox for hierarchical Bayesian parameter estimation of the Drift Diffusion Model (and now many other models!). Drift Diffusion Models (and related sequential sampling models) are widely used in psychology and cognitive neuroscience to study decision making.
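To make the model concrete, here is a minimal, illustrative simulation of a single DDM trial. This is a plain-Python sketch of the generative process, not part of HDDM's API: noisy evidence with drift rate v accumulates from starting point z*a until it crosses one of the two boundaries at 0 and a.

```python
import random

def simulate_ddm_trial(v=0.5, a=2.0, z=0.5, t=0.3,
                       dt=0.001, noise=1.0, seed=None):
    """Simulate one drift-diffusion trial via Euler-Maruyama.

    v: drift rate, a: boundary separation, z: relative starting point,
    t: non-decision time, dt: integration step, noise: diffusion coefficient.
    Returns (response, reaction_time); response is 1 (upper boundary)
    or 0 (lower boundary).
    """
    rng = random.Random(seed)
    x = z * a                     # absolute starting point of evidence
    sd = noise * dt ** 0.5        # per-step noise standard deviation
    n_steps = 0
    while 0.0 < x < a:            # accumulate until a boundary is crossed
        x += v * dt + rng.gauss(0.0, sd)
        n_steps += 1
    response = 1 if x >= a else 0
    return response, t + n_steps * dt

resp, rt = simulate_ddm_trial(v=1.0, seed=42)
```

A positive drift rate makes upper-boundary responses more likely and, on average, faster; HDDM works in the opposite direction, inferring parameters such as v, a, z and t from observed choices and reaction times.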

Check out the tutorial on how to get started. Further information can be found below as well as in the howto section and the documentation.


  • Uses hierarchical Bayesian estimation (via PyMC) of DDM parameters to allow simultaneous estimation of subject and group parameters, where individual subjects are assumed to be drawn from a group distribution. HDDM should thus produce better estimates when fewer RT values are measured, compared to methods that use maximum likelihood for individual subjects (e.g., DMAT or fast-dm).
  • Heavily optimized likelihood functions for speed (Navarro & Fuss, 2009).
  • Flexible creation of complex models tailored to specific hypotheses (e.g. estimation of separate drift-rates for different task conditions; or predicted changes in model parameters as a function of other indicators like brain activity).
  • Estimate trial-by-trial correlations between a brain measure (e.g. fMRI BOLD) and a diffusion model parameter using the HDDMRegression model.
  • Built-in Bayesian hypothesis testing and several convergence and goodness-of-fit diagnostics.
  • As of version 0.7.6, HDDM includes modules for analyzing reinforcement learning data with the reinforcement learning drift diffusion model (RLDDM) and a reinforcement learning (RL) model (Pedersen & Frank, 2020). See the tutorial for the RLDDM and RL modules here, and the paper here.
  • NEW: an HDDM extension for fitting arbitrary sequential sampling models beyond the DDM, using neural networks. It includes all the features of regular HDDM (including regression models) and more. For how to use this extension, see the tutorial here and the paper here.
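As a rough intuition for the hierarchical estimation described above, here is a toy partial-pooling sketch in plain Python (not HDDM's actual machinery, which places full group distributions over parameters): each subject's estimate is shrunk toward the group mean, and subjects contributing few trials are shrunk the most.

```python
def shrink_subject_means(subject_trials, prior_strength=10.0):
    """Toy partial pooling: shrink each subject's mean toward the group mean.

    subject_trials: dict mapping subject id -> list of observations.
    prior_strength: pseudo-count controlling how strongly subjects with
    few trials are pulled toward the group mean (an illustrative knob,
    standing in for the group-level variance a hierarchical model infers).
    """
    all_obs = [x for obs in subject_trials.values() for x in obs]
    group_mean = sum(all_obs) / len(all_obs)
    shrunk = {}
    for subj, obs in subject_trials.items():
        n = len(obs)
        subj_mean = sum(obs) / n
        w = n / (n + prior_strength)   # more trials -> less shrinkage
        shrunk[subj] = w * subj_mean + (1 - w) * group_mean
    return shrunk

# A well-sampled subject keeps its own mean; a sparse one borrows strength.
data = {'s1': [0.9] * 100, 's2': [0.1] * 2}
est = shrink_subject_means(data)
```

This is why hierarchical estimation helps with small trial counts: noisy per-subject estimates borrow statistical strength from the rest of the group instead of being fit in isolation.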

Comparison to other packages

A recent paper by Roger Ratcliff quantitatively compared DMAT, fast-dm, and EZ, and concluded: “We found that the hierarchical diffusion method [as implemented by HDDM] performed very well, and is the method of choice when the number of observations is small.”

Find the paper here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517692/


The following is a minimal python script to load data, run a model and examine its parameters and fit.

import hddm

# Load data from csv file into a pandas DataFrame
data = hddm.load_csv('simple_difficulty.csv')

# Create an HDDM model object, estimating a separate
# drift-rate v for each level of the 'difficulty' column
model = hddm.HDDM(data, depends_on={'v': 'difficulty'})

# Start MCMC sampling
model.sample(2000, burn=20)

# Print fitted parameters and other model statistics
model.print_stats()

# Plot posterior distributions and theoretical RT distributions
model.plot_posteriors()
model.plot_posterior_predictive()

For more information about the software and theories behind it, please see the main publication.


As of release 0.6.0, HDDM is compatible with Python 3, which we encourage you to use.

The easiest way to install HDDM is through Anaconda (available for Windows, Linux and OSX):

  1. Download and install Anaconda.
  2. In a shell (Windows: Go to Start->Programs->Anaconda->Anaconda command prompt) type:
conda install -c pymc hddm

If you want to use pip instead of conda, type:

pip install pandas
pip install pymc
pip install kabuki
pip install hddm

This might require super-user rights (sudo). Note that this installation method is discouraged, as it can lead to dependency problems on various platforms.

To get access to the RLDDM and RL modules you will have to install via pip. Alternatively you can use docker to get access to the most recent version of HDDM by calling:

docker pull madslupe/hddm

This Docker image runs HDDM in a Jupyter notebook.

If you are having installation problems please contact the mailing list.

And if you are a Mac user, check out this thread for advice on installation.

How to cite

If HDDM was used in your research, please cite the publication:

Wiecki TV, Sofer I and Frank MJ (2013). HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python. Front. Neuroinform. 7:14. doi: 10.3389/fninf.2013.00014

If you've used any of the RL-modules (HDDMrl, HDDMrlRegressor or Hrl), please cite this paper:

Pedersen ML and Frank MJ (2020). Simultaneous hierarchical Bayesian parameter estimation for reinforcement learning and drift diffusion models: a tutorial and links to neural data. Computational Brain & Behavior. doi: 10.1007/s42113-020-00084-w

If you've used the HDDM extension to fitting other models using neural network based likelihood functions, please cite this paper:

Fengler A, Govindarajan LN, Chen T, and Frank MJ (2021). Likelihood Approximation Networks (LANs) for Fast Inference of Simulation Models in Cognitive Neuroscience. eLife 10:e65074.

Published papers using HDDM


James Rowe (Cambridge University): “The HDDM modelling gave insights into the effects of disease that were simply not visible from a traditional analysis of RT/Accuracy. It provides a clue as to why many disorders including PD and PSP can give the paradoxical combination of akinesia and impulsivity. Perhaps of broader interest, the hierarchical drift diffusion model turned out to be very robust. In separate work, we have found that the HDDM gave accurate estimates of decision parameters with many fewer than 100 trials, in contrast to the hundreds or even thousands one might use for ‘traditional’ DDMs. This meant it was realistic to study patients who do not tolerate long testing sessions.”

Getting started

Check out the tutorial on how to get started. Further information can be found in howto and the documentation.

Join our low-traffic mailing list.
