# RGC_spiketrains_texture_motion_marmoset

Ganglion cell spike train data recorded from the marmoset retina. The focus is on stimulation with random-walk-like motion of a spatial texture.

## Folder structure

```
20180710_YE_60MEA_Marmoset_eye1_42/
├── frametimes/
├── ks_sorted/
│   └── spikes/
├── stimuli/
├── spike_sorting.ods
└── spike_sorting_ks.ods
```

`spike_sorting.ods` contains the metadata and the sorting information obtained with IGOR, which is not relevant here. `spike_sorting_ks.ods` contains only the metadata and is kept for backward compatibility.

### frametimes

Used for synchronizing the stimulus with the recorded data. There is a delay between a pulse being recorded and the screen actually updating; this delay is stored in the `monitor_delay` cell of `spike_sorting_ks.ods`. The value is already added when the `.npz` files are saved, so they contain the correction, in contrast to the `.mat` files that most people in the lab use, which do not.

Each `.npz` file contains two arrays, `f_on` and `f_off`, for the onsets and the offsets of the square-wave pulses, respectively.

For more information, see `extractframetimes` in `pymer/modules/analysis_scripts.py`.

### ks_sorted

- `spike_times.npz`: Spike times of all units for all stimuli.
- `spike_clusters.npy`: Identities of the spiking units, corresponding to the entries in `spike_times.npz`.
- `cluster_info.tsv`: All information about the units, such as channel number, assigned quality, and group (good, noise).

Since all stimuli are concatenated for sorting, the individual stimuli are not separated in these files. To make it easier to analyze each stimulus, the **spikes** folder contains one `.npz` file per stimulus, in which each unit's spike times for that stimulus are stored. Units that were classified as noise are not included when saving to the spikes folder.

```python
import numpy as np

x = np.load("1.npz", allow_pickle=True)
spikes = x['spikes']
# Note that this is an object array, since the length of each "line" is different.
# Each "line" corresponds to the spike times of a single unit.
```

```python
>>> spikes[0]  # Spike times of the first unit
array([ 2.19272, 3.62512, 3.67528, ..., 314.62344, 314.96828, 314.99288])
```

The **spikes** folder also contains `clusters.tsv`, a simplified version of `cluster_info.tsv` with the noise units and unneeded columns removed. It also contains channel numbers and cluster numbers, for backward compatibility with IGOR-style analyses.

### stimuli

Contains the parameters that were used in the experiment, one file per stimulus.

## Loading and working with the texture stimulus

The texture stimulus is referred to as OMB (objects moving background), since this was the original name of the stimulus. To use the functions already present in the pymer package, pymer first needs to be set up; follow the instructions [here](https://github.com/ycanerol/pymer).

To start working with the OMB stimulus, create an instance of the `OMB` class:

```python
from omb import OMB

st = OMB('20180710', 8)
```

This object contains the basic information that might be needed to analyze the texture stimulus.

```python
>>> spikes = st.allspikes()  # Load binned spikes for all units
>>> spikes.shape
(96, 54000)  # 96 units, 54000 time bins
>>> st.texturebasic.shape  # The texture that is moved around to generate the stimulus
(200, 200)
```
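As a quick sanity check, the texture and the binned spikes can be inspected together. The following is a minimal sketch that uses only the attributes shown above; it assumes matplotlib is available in the pymer environment.

```python
import matplotlib.pyplot as plt
from omb import OMB

st = OMB('20180710', 8)
spikes = st.allspikes()            # shape (n_units, n_time_bins)

fig, (ax_tex, ax_pop) = plt.subplots(1, 2, figsize=(9, 4))
ax_tex.imshow(st.texturebasic, cmap='gray')
ax_tex.set_title('Texture')
ax_pop.plot(spikes.sum(axis=0))    # population spike count per time bin
ax_pop.set_xlabel('Time bin')
ax_pop.set_ylabel('Spikes (all units)')
plt.tight_layout()
plt.show()
```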
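If the raw pulse times are needed (for example, to re-bin spikes yourself), the frametimes files described above can be loaded directly. A minimal sketch; the exact filename under `frametimes/` is dataset-specific, so the one below is hypothetical:

```python
import numpy as np

# Hypothetical filename; check the frametimes/ folder for the actual names.
ft = np.load("frametimes/8.npz")
f_on, f_off = ft['f_on'], ft['f_off']  # pulse onsets and offsets
# monitor_delay is already included in these values (see frametimes above).
```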
### Documentation

You can see how all the attributes and functions are defined in `classes/omb.py` and `classes/stimulus.py` in pymer. `OMB` is a child class of the more general `Stimulus`, so it contains everything in `Stimulus` plus what is defined in `omb.py`.

### Interactive demo

To explore the functionality of the pymer `OMB` class, you can run the `pymer_omb_examples.ipynb` notebook. For this, Jupyter needs to be installed, which you can do by running

```bash
conda install jupyter
```

**Note**: Jupyter can be installed into your base environment (as opposed to the pymer environment); this makes it easier to use Jupyter with the other environments you might have.

Start a notebook server by running

```bash
jupyter notebook
```
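If Jupyter lives in the base environment while pymer has its own environment, the pymer environment can be exposed to Jupyter as a kernel. A minimal sketch, assuming the environment is named `pymer` and that `ipykernel` can be installed into it:

```bash
# Assumes the conda environment is called "pymer"
conda activate pymer
conda install ipykernel
python -m ipykernel install --user --name pymer
# Then select the "pymer" kernel in the notebook interface
```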