#64 Adding AlphaOmega mpx and lsx file version 4

Merged
sprenger merged 10 commits from NeuralEnsemble/alphaomegampx into NeuralEnsemble/master 2 years ago

Third try, let's hope it's the last… Sorry again for the mess

Duplicate of #62, recreated to open the PR on the main repo

These files were generated using AlphaRS hardware and the software provided with it.

The two lsx files are indexes listing the mpx files of one session (a new lsx file is created every time the software is restarted).

The mpx files contain the metadata and data of analog/digital input/output channels. One session can create several mpx files, which can be linked (the same recording split into several files) or not (several recording starts/stops in the same session).

The first listing contains two separate recordings; the second contains only one.

The pull-request for the python-neo code side can be found [here](https://github.com/NeuralEnsemble/python-neo/pull/1049)
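
For context, a minimal loading sketch (assuming the `AlphaOmegaRawIO` reader added in that python-neo PR; the directory path is a placeholder for a local copy of these test files):

```python
from neo.rawio import AlphaOmegaRawIO

# Placeholder path: point this at a local copy of the test files
reader = AlphaOmegaRawIO(dirname="path/to/alphaomega/mpx/v4")
reader.parse_header()  # scans the lsx listings and the mpx files they index
print(reader)  # summarizes the blocks/segments found in the dataset
```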

sprenger commented 2 years ago
Owner

Hi @tperret, this time the content of the files is present and the PR is functional :)

I checked out the files and noticed that two of them are very large (`F211115-0001.mpx`: 32 MB; `mapfile0008.mpx`: 25 MB). For test files that are downloaded very frequently this is still very large. Is there a way to reduce their size? E.g. duplicate the files, but include only the first few samples of each recording channel?

Also, could you please add a one-sentence description + attribution of the dataset in a README file? As an example, you can have a look here: https://gin.g-node.org/NeuralEnsemble/ephy_testing_data/src/master/neuralynx/Cheetah_v1.1.0/README.txt

Thomas Perret commented 2 years ago
Contributor

@sprenger Yes, I know the files are pretty big; I wanted to be able to test the main loading cases (several neo segments and multiple blocks). Sadly, I can't really cut the files in a proper way without creating my own AlphaOmega format (the original format contains data that is not parsable). I could generate a new dataset with only one or two channels, which would cut the total size by a factor of 8 (there are 16 channels recorded in the current dataset); would that be better? I can also make shorter recordings (the total time is ~50 s in the current dataset).

I'll add the dataset description once I've made the new recordings.

sprenger commented 2 years ago
Owner

So the different mpx files do not correspond to individual channels? It seems that only the first of the `F211115-000{1..8}.mpx` files is bigger than the others. Is this intentional?

Reducing the recording time is always a good idea. For testing whether a specific format can be read properly, a few seconds of data are typically sufficient.

Thomas Perret commented 2 years ago
Contributor

No, each mpx file records all channels (analog, digital, input/output) in a seemingly random interleaved structure, so different files record different time periods.

In fact, in this dataset the first file, `F211115-0001.mpx`, is a complete record (that's why it's big), and the files `F211115-000[2-8].mpx` are another record split into smaller files. I did that to check that the neo implementation I wrote can merge data from split records into one segment.

To sum up, there are 3 recording segments gathered into two blocks: one for the first `F211115-0001.lsx` file (two segments) and one for the `mapfile0008.lsx` file (one segment).

I'll describe the data better in the shorter dataset version.
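
For anyone verifying this locally, a minimal sketch (same assumptions as the loading sketch above: the `AlphaOmegaRawIO` reader from the linked python-neo PR and a placeholder path) that checks the two-blocks/three-segments layout:

```python
from neo.rawio import AlphaOmegaRawIO

# Placeholder path: point this at a local copy of the test files
reader = AlphaOmegaRawIO(dirname="path/to/alphaomega/mpx/v4")
reader.parse_header()

# Expected layout: 2 blocks (one per lsx listing), 3 segments overall
assert reader.block_count() == 2
total_segments = sum(
    reader.segment_count(block) for block in range(reader.block_count())
)
assert total_segments == 3
```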

Thomas Perret commented 2 years ago
Contributor

I hope I didn't mess up the repository too much, because I uploaded a few different datasets. I tried to remove the old ones, but I'm not a git-annex guru and I think I made one or two mistakes…

I generated about 8 MB of data for an ~8 s recording (there is a lot of metadata and/or unused channels that are difficult to remove from the recording). @sprenger: is this small enough?

I also added the required README: is the description clear enough?

sprenger commented 2 years ago
Owner

Yes, now everything looks good. Thanks for adjusting the dataset and the README! I will merge this now.
