elephant.unitary_event_analysis module

Synopsis

Unitary Event (UE) analysis is a statistical method that detects excess spike correlation between simultaneously recorded neurons in a time-resolved manner, by comparing the empirical number of spike coincidences to the number expected from the firing rates of the neurons (see [1]).

Background

It has been proposed that cortical neurons organize dynamically into functional groups (“cell assemblies”) by the temporal structure of their joint spiking activity. The Unitary Events analysis method detects conspicuous patterns of synchronous spike activity among simultaneously recorded single neurons. The statistical significance of a pattern is evaluated by comparing the empirical number of occurrences to the number expected given the firing rates of the neurons. Key elements of the method are the proper formulation of the null hypothesis and the derivation of the corresponding count distribution of synchronous spike events used in the significance test. The analysis is performed in a sliding window manner and yields a time-resolved measure of significant spike synchrony.
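The core comparison described above can be illustrated with a minimal numpy sketch (this is not the elephant implementation; all names here are illustrative): two binarized spike trains are compared by counting empirical coincidences and computing the count expected under independence from the firing probabilities.

```python
import numpy as np

# Illustrative sketch: compare the empirical coincidence count of two
# binarized spike trains to the count expected from their firing rates
# under the null hypothesis of independence.
rng = np.random.default_rng(0)
n_bins = 1000
p1, p2 = 0.05, 0.08                        # per-bin spike probabilities
st1 = rng.random(n_bins) < p1              # binary spike train, neuron 1
st2 = rng.random(n_bins) < p2              # binary spike train, neuron 2

n_emp = int(np.sum(st1 & st2))             # empirical coincidences
n_exp = st1.mean() * st2.mean() * n_bins   # expected under independence
```

A significant excess of `n_emp` over `n_exp` in a window is what the method reports as a Unitary Event.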

Examples

Tutorial on using Unitary Events


References

[1] Gruen S, Diesmann M, Grammont F, Riehle A, Aertsen A (1999) Detecting unitary events without discretization of time. J Neurosci Methods, 94(1): 67-79.
[2] Gruen S, Diesmann M, Aertsen A (2002) Unitary events in multiple single-neuron spiking activity: I. Detection and significance. Neural Comput, 14(1): 43-80.
[3] Gruen S, Diesmann M, Aertsen A (2002) Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Comput, 14(1): 81-119.
[4] Gruen S, Riehle A, Diesmann M (2003) Effect of cross-trial nonstationarity on joint-spike events. Biological Cybernetics, 88(5): 335-351.
[5] Gruen S (2009) Data-driven significance estimation of precise spike correlation. J Neurophysiology, 101: 1126-1140 (invited review).

Author Contributions

  • Vahid Rostami (VR)
  • Sonja Gruen (SG)
  • Markus Diesmann (MD)

VR implemented the method; SG and MD provided input.

Functions

elephant.unitary_event_analysis.gen_pval_anal(mat, N, pattern_hash, method='analytic_TrialByTrial', **kwargs)[source]

Computes the expected number of coincidences and returns a function to calculate the p-value for a given empirical number of coincidences.

This function generates a Poisson distribution with the expected value calculated from mat. It returns a function that takes the empirical number of coincidences, n_emp, and calculates the p-value as the area under the Poisson distribution from n_emp to infinity.
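The p-value described above is the upper tail of a Poisson distribution. A minimal stdlib sketch of that computation (the elephant function instead returns a closure over the expected counts; `poisson_pvalue` is a hypothetical name):

```python
import math

def poisson_pvalue(n_emp, n_exp):
    """Upper-tail Poisson p-value: P(X >= n_emp) for X ~ Poisson(n_exp).

    Illustrative sketch of the significance test, computed term by term;
    suitable for the small counts typical of coincidence analysis.
    """
    # 1 - CDF evaluated at n_emp - 1
    cdf = sum(math.exp(-n_exp) * n_exp**k / math.factorial(k)
              for k in range(int(n_emp)))
    return 1.0 - cdf
```

For example, with an expectation of 2 coincidences, observing at least 1 is unsurprising (p close to 1), while observing many more yields a small p-value.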

elephant.unitary_event_analysis.hash_from_pattern(m, N, base=2)[source]

Calculate a unique hash number for a spike pattern, or for a matrix of spike patterns (one pattern per column), composed of N neurons.
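The hashing idea can be sketched in a few lines of numpy (a hypothetical re-implementation, not elephant's code): each 0-1 column is read as a number in the given base, with neuron 0 as the least significant digit, matching the inverse_hash_from_pattern example shown below.

```python
import numpy as np

def hash_patterns(m, base=2):
    """Hash each column (one 0-1 pattern per column) to an integer.

    Illustrative sketch: neuron i contributes base**i when it spikes,
    so each distinct pattern maps to a unique integer.
    """
    m = np.asarray(m)
    weights = base ** np.arange(m.shape[0])  # [1, base, base**2, ...]
    return weights @ m
```

With this convention the patterns (1,1,0,0) and (1,1,1,0) over four neurons hash to 3 and 7, respectively.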

elephant.unitary_event_analysis.inverse_hash_from_pattern(h, N, base=2)[source]

Calculate the 0-1 spike patterns (as a matrix) from hash values.

Examples

>>> import numpy as np
>>> h = np.array([3,7])
>>> N = 4
>>> inverse_hash_from_pattern(h,N)
    array([[1, 1],
           [1, 1],
           [0, 1],
           [0, 0]])
elephant.unitary_event_analysis.jointJ(p_val)[source]

Surprise measure.

Logarithmic transformation of the joint p-value into a surprise measure, for better visualization: highly significant events are indicated by very low joint p-values.
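A minimal sketch of such a transform, following the surprise measure of Gruen et al. (2002), S = log10((1 - p) / p) (hedged: this is the standard form from the references; the function name here is hypothetical):

```python
import math

def surprise(p_val):
    """Logarithmic surprise transform of a joint p-value.

    Sketch: p -> 0 gives a large positive surprise (excess synchrony),
    p = 0.5 gives zero, and p -> 1 gives a large negative surprise
    (lack of synchrony).
    """
    return math.log10((1.0 - p_val) / p_val)
```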

elephant.unitary_event_analysis.n_emp_mat(mat, N, pattern_hash, base=2)[source]

Count the occurrences of spike coincidence patterns in the given spike trains.
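The counting step can be sketched by hashing each time bin's column of the spike matrix and comparing against the requested pattern hashes (illustrative code using the LSB-first hashing convention of the inverse_hash_from_pattern example above; `count_patterns` is a hypothetical name):

```python
import numpy as np

def count_patterns(mat, pattern_hashes, base=2):
    """Count how often each requested coincidence pattern occurs.

    mat: 0-1 matrix, one row per neuron, one column per time bin.
    pattern_hashes: hashes of the patterns of interest.
    """
    mat = np.asarray(mat)
    weights = base ** np.arange(mat.shape[0])
    bin_hashes = weights @ mat                      # one hash per time bin
    return [int(np.sum(bin_hashes == h)) for h in pattern_hashes]
```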

elephant.unitary_event_analysis.n_emp_mat_sum_trial(mat, N, pattern_hash)[source]

Calculates the empirical number of observed patterns, summed across trials.

elephant.unitary_event_analysis.n_exp_mat(mat, N, pattern_hash, method='analytic', n_surr=1)[source]

Calculates the expected joint probability for each spike pattern.

elephant.unitary_event_analysis.n_exp_mat_sum_trial(mat, N, pattern_hash, method='analytic_TrialByTrial', **kwargs)[source]

Calculates the expected joint probability for each spike pattern, summed over trials.
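For the analytic methods, the expectation under independence is the product of each neuron's per-bin spike (or no-spike) probability, as required by the pattern. An illustrative sketch (hypothetical name, not elephant's internal code):

```python
import numpy as np

def expected_pattern_prob(rates, pattern):
    """Expected joint probability of a 0-1 coincidence pattern.

    Under the independence null hypothesis, neuron i contributes its
    per-bin spike probability rates[i] if pattern[i] == 1, and
    1 - rates[i] otherwise; the joint probability is the product.
    """
    rates = np.asarray(rates, dtype=float)   # per-bin spike probabilities
    pattern = np.asarray(pattern)            # 0-1 entries, one per neuron
    return float(np.prod(np.where(pattern == 1, rates, 1.0 - rates)))
```

For example, two neurons with per-bin probabilities 0.1 and 0.2 yield an expected joint-spike probability of 0.02 per bin.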

elephant.unitary_event_analysis.unitary_event_analysis(data, bin_size, window_size, window_step, pattern_hash, method='analytic_TrialByTrial', t_start=None, t_stop=None, binary=True, **kwargs)[source]

Performs the Unitary Event Analysis in a sliding window fashion.

In each sliding window, the empirical number of occurrences of the specified coincidence patterns is compared to the number expected from the firing rates, and the significance of any excess synchrony is evaluated (see Background above).

data: list of lists of SpikeTrain objects
Contains the spike data to be analyzed. data is constructed such that data[t][n] contains the SpikeTrain object that refers to neuron n in trial t. For any trial t and any neuron n, the SpikeTrain objects are assumed to share a common time axis.
bin_size: Quantity scalar (unit: time)
Size of bins for discretizing spike trains.
window_size: Quantity scalar (unit: time)
Size of the sliding analysis window.
window_step: Quantity scalar (unit: time)
Step size by which the window is moved.
pattern_hash: list of integers
List of patterns to include in the analysis. Patterns are identified by their hash values as returned by the function hash_from_pattern (see also function inverse_hash_from_pattern).
method: string

Method to calculate the Unitary Events.

  • ‘analytic_TrialByTrial’: calculate the expectancy (analytically) on each trial, then sum over all trials.
  • ‘analytic_TrialAverage’: calculate the expectancy by averaging over trials (cf. Gruen et al. 2003).
  • ‘surrogate_TrialByTrial’: calculate the distribution of expected coincidences by spike time randomization in each trial and sum over trials.

Default: ‘analytic_TrialByTrial’

t_start: None or Quantity scalar (unit: time)

The start time of the analysis. If None, it is retrieved from the t_start attribute of the SpikeTrain objects.

Default: None.

t_stop: None or Quantity scalar (unit: time)

The stop time of the analysis. If None, it is retrieved from the t_stop attribute of the SpikeTrain objects.

Default: None.
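The sliding-window scheme itself can be sketched in plain numpy (all names hypothetical; the elephant function additionally handles Quantity units, multiple trials, and pattern hashes): for each window position, count the empirical full coincidences and the analytically expected number from the window-local firing probabilities.

```python
import numpy as np

def sliding_window_counts(binned, window_size, window_step):
    """Empirical vs. expected coincidence counts per window position.

    binned: 0-1 matrix, one row per neuron, one column per time bin.
    Returns a list of (window_start_bin, n_emp, n_exp) tuples.
    """
    n_neurons, n_bins = binned.shape
    results = []
    for start in range(0, n_bins - window_size + 1, window_step):
        win = binned[:, start:start + window_size]
        # bins in which all neurons spike simultaneously
        n_emp = int(np.sum(np.all(win == 1, axis=0)))
        # expected count under independence, from window-local rates
        n_exp = float(np.prod(win.mean(axis=1)) * window_size)
        results.append((start, n_emp, n_exp))
    return results
```

Each tuple would then be fed into the significance test (expected count vs. empirical count) to obtain a time-resolved significance measure.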