Updated FAQ (markdown)

Gordon McCann 2021-10-13 14:50:16 -04:00
parent e9cef675a5
commit 2c78f73941

FAQ.md | 4 ++++

@@ -5,3 +5,7 @@
* **The event builder has been running progressively more slowly, and the data doesn't make any sense. What do I do?** As stated before, there are rare cases (crashes, unexpected shutdowns, etc.) where the `temp_binary` directory of the workspace is not cleaned. The code analyzes every file found in `temp_binary`, so, as one might expect, the compounding data files can lead to undefined behavior. If this is not the case, check your window sizes, particularly the slow window. Extreme values lead to strange behavior: if the window is too small, none of the data will be built together; if it is too large, the data will all be lumped into a few built events. In general the slow window should be around 1.5 to 3 µs, as it is dominated almost entirely by the delay lines. A minimal sketch of the window-grouping logic is given after this list.
* **I run the event builder and nothing happens, and there are no errors reported to the terminal. What is wrong?** This is most likely because the event builder cannot locate the raw binary archive you are trying to analyze. The event builder searches the `raw_binary` directory for a file with a name formatted as `run_#.tar.gz`, where `#` is the run number. Make sure that your file is actually in `raw_binary`, has the correct name format, and is a `.tar.gz` archive. Otherwise, make sure that the files in your run are CoMPASS `.bin` binary files with the correct file name format given by the CoMPASS software. Note that CoMPASS has slightly changed the file name format in the past, and this can sometimes lead to issues. A sketch of the archive-name check appears after this list.
* **My data file is huge! How do I manage all of this data?** If your data file seems rather large, there are a few things to consider. First, check the size of the output file against the size of the raw binary archive. If they are roughly the same size, each run simply contains a very large amount of data; if you are actively running an experiment, you may want to consider taking shorter runs so that the event-building for each run is faster. If they are not of comparable sizes, the event builder is appending extraneous data, which most commonly occurs when the analysis option is used. If the size is untenable for the amount of storage available, several solutions exist. One can remove some of the data members from the analyzed structures, which reduces the size of each event written to disk (see the trimming example after this list); this is certainly possible for some experiments, such as FP-only data sets where all of the SABRE data is extraneous. Alternatively, one can periodically pipe the data to larger external storage.
* **ROOT throws a bunch of warnings about a dictionary when I open my output ROOT files. Is this an issue?** The answer is both yes and no. Our data is saved using a ROOT dictionary (see [ROOT's documentation](https://root.cern.ch/root/htmldoc/guides/users-guide/AddingaClass.html) for details), which allows us to save our custom data structures. This is very important for optimizing data storage and workflow, but it comes at the cost that we sometimes have to tell ROOT how to interpret our data. If you are just looking to examine the file using something like a TBrowser, this is generally not an issue. ROOT sometimes behaves strangely with `std::vector` objects, but most of the ROOT interpreter's functionality remains usable. However, if you want to use your own macro or compiled program, you will need to provide a copy of the dictionary to link against, as well as the appropriate header files to include. The event builder automatically generates a shared library for the dictionary for use in other programs and ROOT macros, and all of the data structures are found in the file `DataStructs.h` (see the macro sketch after this list).
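
To illustrate the slow-window behavior from the first question, here is a minimal sketch of timestamp grouping. The `Hit` type, the picosecond units, and the `BuildEvents` function are hypothetical stand-ins, not the event builder's actual implementation:

```cpp
// Minimal sketch of slow-window event building (hypothetical types and names).
#include <cstdint>
#include <iostream>
#include <vector>

struct Hit { uint64_t timestamp; }; // timestamps assumed in picoseconds

// Group time-ordered hits into events: a hit joins the current event if it
// falls within slowWindow of the event's first hit, otherwise it opens a new event.
std::vector<std::vector<Hit>> BuildEvents(const std::vector<Hit>& hits, uint64_t slowWindow)
{
    std::vector<std::vector<Hit>> events;
    for (const auto& hit : hits)
    {
        if (events.empty() || hit.timestamp - events.back().front().timestamp > slowWindow)
            events.push_back({hit});        // window closed -> start a new event
        else
            events.back().push_back(hit);   // within window -> same event
    }
    return events;
}

int main()
{
    std::vector<Hit> hits = {{0}, {1'000'000}, {5'000'000'000}}; // 0, 1 us, 5 ms
    uint64_t slowWindow = 3'000'000; // 3 us expressed in ps
    std::cout << BuildEvents(hits, slowWindow).size() << " events built\n"; // prints 2
}
```

With a 3 µs window the first two hits are built together and the third opens a new event; shrinking the window toward zero leaves every hit in its own event, while a huge window lumps everything into one.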
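
For the second question, a quick way to verify the archive name is to reconstruct it exactly as described above. This standalone check assumes nothing beyond the `raw_binary` layout and `run_#.tar.gz` format stated in the answer:

```cpp
// Hypothetical check that the expected archive exists in raw_binary;
// this is a diagnostic sketch, not part of the event builder itself.
#include <filesystem>
#include <iostream>
#include <string>

int main()
{
    int runNumber = 17; // example run number
    std::filesystem::path archive =
        std::filesystem::path("raw_binary") / ("run_" + std::to_string(runNumber) + ".tar.gz");
    if (std::filesystem::exists(archive))
        std::cout << "Found " << archive << "\n";
    else
        std::cout << archive << " is missing or misnamed\n";
}
```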
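
For trimming the analyzed structures mentioned in the third question, the idea is simply to drop members that are always empty for your experiment. The member names below are hypothetical placeholders; the real definitions live in `DataStructs.h` and will differ:

```cpp
// Hypothetical illustration of trimming an analyzed event structure for an
// FP-only data set; not the actual DataStructs.h contents.
#include <vector> // needed if the SABRE members below are restored

struct ProcessedEvent
{
    double xavg = -1e6, x1 = -1e6, x2 = -1e6; // focal-plane positions
    double scintLeft = -1, anodeBack = -1;    // focal-plane detector signals
    // For FP-only runs the SABRE members are always empty, so removing them
    // shrinks every event written to disk:
    // std::vector<double> sabreRingE;
    // std::vector<double> sabreWedgeE;
};
```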
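
Finally, for the dictionary question, a ROOT macro can load the generated shared library before touching the data. The library name, output file name, and tree name below are placeholders for whatever your build and run actually produce:

```cpp
// Sketch of a ROOT macro that loads the event builder's dictionary library
// before reading an output file. "libEventBuilderDict.so" and "SPSTree" are
// assumptions; substitute the names from your own build and output.
R__LOAD_LIBRARY(libEventBuilderDict.so)
#include "DataStructs.h"
#include "TFile.h"
#include "TTree.h"
#include <iostream>

void readRun()
{
    TFile* file = TFile::Open("run_17.root");   // example output file
    auto* tree = (TTree*)file->Get("SPSTree");  // tree name is an assumption
    std::cout << tree->GetEntries() << " events in file\n";
}
```

For a compiled program the same applies at build time: link against the generated shared library and include `DataStructs.h` so the types are known to the compiler.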