On Mon, Sep 12, 2016 at 1:32 PM, Chris Gorgolewski <[email protected]> wrote:
> It's time for a Q&A!
>
>> Can you explain what steps you've taken to adapt the pipelines to work
>> with the BIDS format?
>
> The pipelines are not modified - the BIDS App calls exactly the same
> high-level functions that the scripts in
> https://github.com/Washington-University/Pipelines/tree/master/Examples
> call. All of the work of adapting to BIDS is figuring out which raw files
> go where and which parameters to set. All of this code is contained in
> this script: https://github.com/BIDS-Apps/HCPPipelines/blob/master/run.py
>
>> Conversely, have you taken any steps to adapt the existing HCP packages
>> to work with these containers?
>
> If by "HCP packages" you mean datasets, then yes. As mentioned in the
> readme: "To convert DICOMs from your HCP-Style (CMRR) acquisitions to
> BIDS, try using heudiconv <https://github.com/nipy/heudiconv> with this
> heuristic file
> <https://github.com/nipy/heudiconv/blob/master/heuristics/cmrr_heuristic.py>."
> We are also working on a simple HCP2BIDS tool (I'll post it here when
> it's ready).

regarding packages, i'll take the software interpretation of the package
question. if that is the case, the docker container runs the hcp pipeline
scripts as distributed from the project, using the specific versions of
dependencies listed in the documentation for the scripts. so they are
running the "hcp versions" of the scripts.

just a quick aside on the heudiconv script - this would need to be adapted
for each connectome project, since the same data are not being acquired at
every site, but it gives the general flavor of how you can get bids output
from an acquisition done with the CMRR sequences.

cheers,

satra
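To make the "which raw files go where and which parameters to set" step concrete, here is a minimal Python sketch of the kind of discovery a BIDS App wrapper has to do. It is not the actual logic of run.py: the directory layout follows BIDS conventions, but the helper names and the pipeline flags in build_args() (--t1, --t2, --echospacing, --unwarpdir) are illustrative placeholders rather than the HCP scripts' real options.

    # A minimal sketch (not the actual run.py) of how a BIDS App wrapper can
    # discover raw inputs and sidecar parameters for one subject. Flag names
    # in build_args() are illustrative placeholders, not the HCP scripts'
    # real command-line options.
    import json
    import os
    from glob import glob


    def collect_inputs(bids_dir, subject):
        """Find anatomical and functional images plus their JSON sidecars."""
        sub = 'sub-%s' % subject
        anat = os.path.join(bids_dir, sub, 'anat')
        func = os.path.join(bids_dir, sub, 'func')

        t1w = sorted(glob(os.path.join(anat, sub + '*_T1w.nii*')))
        t2w = sorted(glob(os.path.join(anat, sub + '*_T2w.nii*')))
        bolds = sorted(glob(os.path.join(func, sub + '*_bold.nii*')))

        def sidecar(nii):
            # BIDS keeps acquisition parameters in a JSON file next to the image.
            json_path = nii.replace('.nii.gz', '.json').replace('.nii', '.json')
            with open(json_path) as fp:
                return json.load(fp)

        return {'t1w': t1w, 't2w': t2w, 'bolds': bolds,
                'bold_meta': [sidecar(b) for b in bolds]}


    def build_args(inputs):
        """Turn discovered files and metadata into pipeline arguments (placeholder flags)."""
        meta = inputs['bold_meta'][0] if inputs['bold_meta'] else {}
        return ['--t1=%s' % '@'.join(inputs['t1w']),
                '--t2=%s' % '@'.join(inputs['t2w']),
                '--echospacing=%s' % meta.get('EffectiveEchoSpacing', ''),
                '--unwarpdir=%s' % meta.get('PhaseEncodingDirection', '')]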
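On the heudiconv side, this is a stripped-down sketch of what a heuristic file looks like, assuming heudiconv's infotodict(seqinfo) interface. The BIDS templates and the protocol-name checks ('T1w', 'T2w', 'REST', 'SBRef') are hypothetical placeholders that each site would have to adapt to its own acquisitions, and the attributes available on seqinfo entries can differ across heudiconv versions.

    # A stripped-down, hypothetical heudiconv heuristic, assuming the
    # infotodict(seqinfo) interface. Protocol names and output templates are
    # placeholders; each site must match them to its own acquisitions.

    def create_key(template, outtype=('nii.gz',), annotation_classes=None):
        if template is None or not template:
            raise ValueError('Template must be a valid format string')
        return template, outtype, annotation_classes


    def infotodict(seqinfo):
        """Map scanner series onto BIDS-style output templates."""
        t1w = create_key('sub-{subject}/anat/sub-{subject}_T1w')
        t2w = create_key('sub-{subject}/anat/sub-{subject}_T2w')
        rest = create_key('sub-{subject}/func/sub-{subject}_task-rest_run-{item:02d}_bold')

        info = {t1w: [], t2w: [], rest: []}
        for s in seqinfo:
            # These string matches are site-specific; adjust to your protocol names.
            if 'T1w' in s.protocol_name:
                info[t1w].append(s.series_id)
            elif 'T2w' in s.protocol_name:
                info[t2w].append(s.series_id)
            elif 'REST' in s.protocol_name and 'SBRef' not in s.series_description:
                info[rest].append(s.series_id)
        return info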
