Technically speaking, after the sorting stages, the event building is complete and the next, much more complicated and experiment-specific data analysis stage should take over. However, due to some limitations of the CoMPASS software with online data analysis, as well as to provide a method to test the success of the event builder, the event builder can pass the data on to a very basic analysis class. This analysis is _not_ meant to be used as a final analysis program; it has few safety measures and is typically both too simple and too difficult to modify for most experiments. There are some key features that outline both the use and the drawbacks of this analysis. First, _any_ degeneracy in the data must be resolved before the analysis can use it. Consider the following scenario: the front left delay line signal is slightly noisy, and inside the built event there are two front left delay line hits. To calculate a calibrated focal plane position, one must subtract the timestamps of the left and right signals for a given delay line. How then is the analysis to select which front left delay signal goes with the single front right delay signal? To keep the code generally applicable, it employs a very simple solution: take whichever hit occurred first. One can imagine all of the reasons why this is not desirable for specific experimental cases. This first-in selection scheme is employed for _every_ detection channel that gets converted into analyzed data. In general, the only data member that continues to maintain the earlier policy of not dumping data is the SABRE array; however, a downscaled version of the SABRE array is then required for use with the online plotting. Additionally, due to the dynamic nature of the sorted data, the validity of the data must be checked at analysis time. This induces a performance penalty as more and more complicated analyses are performed, which reduces the usefulness of adding more analysis steps. Finally, the data analysis tends to bloat the file size. Each additional analyzed parameter increases the written data size, and ROOT does not support optional writing. That is, even if an event does not have a right scintillator hit, all of the right scintillator data members will still be written to the file with an illegal default value (typically something like -1). In a more specialized analysis, only relevant data would be written, but due to the general nature of this analysis, along with its focus on providing sanity checks for the event-building process, a lot of experimentally irrelevant data will be written.
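
To make the first-in selection policy concrete, here is a minimal sketch of how degenerate hits could be reduced to a single value; the `Hit` structure and function names below are hypothetical illustrations, not the event builder's actual classes.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical hit: just a timestamp and an energy, for illustration only.
struct Hit
{
    uint64_t timestamp = 0;
    double energy = 0.0;
};

// First-in selection: of all hits recorded for a degenerate channel, keep the earliest.
const Hit* SelectFirstHit(const std::vector<Hit>& hits)
{
    const Hit* first = nullptr;
    for (const auto& hit : hits)
    {
        if (first == nullptr || hit.timestamp < first->timestamp)
            first = &hit;
    }
    return first; // nullptr if the channel had no hits in this event
}

// Delay-line time difference used for a focal plane position: left minus right,
// each taken from the earliest hit. Returns an illegal default if a side is missing,
// mirroring the default-value behavior described above.
double DelayLineTimeDifference(const std::vector<Hit>& frontLeft,
                               const std::vector<Hit>& frontRight)
{
    const Hit* left = SelectFirstHit(frontLeft);
    const Hit* right = SelectFirstHit(frontRight);
    if (left == nullptr || right == nullptr)
        return -1.0e6; // illegal default, analogous to the -1 convention noted above
    return static_cast<double>(left->timestamp) - static_cast<double>(right->timestamp);
}
```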
## Plotting
The main purpose of running the analysis is to generate histograms which help indicate the quality of the data. To this end, the event builder has a plotting routine that uses ROOT tools to generate a file of histograms. This plotting tool can take in multiple analyzed files and generate a single plot file containing all of the data. Additionally, a list of cuts (called the CutList) can be given to the plotter tool, and these cuts will be applied to the data. Cuts should be listed in a cut list file, and each cut should be saved in its own ROOT file as a TCutG object named "CUTG". For each cut, the cut list file asks the user to give a new name for the cut, as well as to specify the names of the variables to which the cut should be applied. There is a fixed set of variable keywords which can be used; see the `CutHandler` class for more details.
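
For reference, the snippet below shows the standard ROOT pattern for retrieving a cut saved as a TCutG named "CUTG", renaming it, and testing a point against it. The file name, new cut name, and variable values are placeholders; the actual bookkeeping in the event builder is handled by the `CutHandler` class.

```cpp
#include "TFile.h"
#include "TCutG.h"

// Minimal sketch: load a graphical cut saved under the name "CUTG" and test a point.
// "my_cut.root", "xavg_theta_gate", and the (x, y) values are placeholders.
void CheckCutExample()
{
    TFile* cutFile = TFile::Open("my_cut.root", "READ");
    if (cutFile == nullptr || !cutFile->IsOpen())
        return;

    TCutG* cut = (TCutG*) cutFile->Get("CUTG"); // cuts must be saved as "CUTG"
    if (cut != nullptr)
    {
        cut->SetName("xavg_theta_gate"); // give the cut a unique, descriptive name

        double x = 150.0, y = 0.5;         // placeholder values of the cut variables
        bool inside = cut->IsInside(x, y); // true if the point falls inside the cut
        (void) inside; // the plotter would use this to decide whether to fill histograms
    }

    cutFile->Close();
}
```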
# Installation, Building, and Setting up the Workspace
First, the only external dependency for this repository is the ROOT Data Analysis Package. Due to the large size and complexity of ROOT, it is not included as a submodule; rather, the user is relied upon to properly install and set up their own ROOT package. This code has been primarily tested and validated using ROOT6, so mileage may vary with older versions.

Installation and building the code is fairly straightforward.

Also included in the `bin` directory is a bash script called `archivist`. This script is for use at the FSU online DAQ, and is mostly irrelevant in other use cases.

Finally, to run the code the user must set up the proper directory environment, referred to as the workspace. An example of what the workspace environment should contain is shown in the `example` directory of the repository. `raw_binary` should contain raw binary archives (read: .tar.gz) of CoMPASS runs that follow the naming format `run_#.tar.gz`. The code unpacks each archive to the `temp_binary` directory, reads the data, and saves an event-built file to the proper directory based on the type of analysis the user requests. The `temp_binary` directory is then cleaned so that it can be used for the next run. Note that, in general, it is best to locate the workspace somewhere other than the repository, usually with a head directory name that indicates which experiment the included data is associated with.
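
As a rough illustration, a workspace might be laid out like the sketch below. Only `raw_binary` and `temp_binary` are named above; the other entries are placeholders, and the `example` directory in the repository should be treated as the authoritative reference.

```
my_experiment_workspace/
├── raw_binary/     (run_#.tar.gz archives of CoMPASS runs)
├── temp_binary/    (scratch space, cleaned after each run is processed)
└── ...             (output directories for event-built and analyzed files,
                     matching the repository's example directory)
```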
# Running the Code
The basics of running the code are mostly contained within the input file, of which there is an example in the repository called `input.txt`. The input file asks for the location of a workspace, a channel map file which gives a list of the digitizer channels and the associated detector information, a board offset file which lists the time shifts to be applied to specific channels, a scaler file which lists any digitizer channels to be taken as scalers, and a cut list file which lists any cuts to be used with the plotter tool. Examples of these files may be found in the `etc` directory. The SESPS-SABRE event builder will also ask for reaction information so that a kinematic correction can be applied to analyzed focal plane data. This includes specifying atomic numbers for use in looking up nuclear masses. Note that the code uses the 2017 AMDC mass evaluation data; if the input requests a nuclear mass not included in that data, an error will occur. Finally, the input file asks for window sizes as well as a range of run numbers over which the program will be run.
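
As a sketch of the failure mode mentioned above, a guarded mass lookup keyed on (Z, A) might look like the following. The class and method names are hypothetical and are not the event builder's actual implementation; they only illustrate why requesting a nuclide absent from the 2017 AMDC data produces an error.

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical mass table keyed on (Z, A); illustration only. The event builder
// parses the 2017 AMDC mass evaluation data with its own machinery.
class MassTableSketch
{
public:
    void AddEntry(uint32_t z, uint32_t a, double massMeV)
    {
        m_table[{z, a}] = massMeV;
    }

    // Asking for a nuclide that is not in the table is an error, analogous to
    // giving the input file a nucleus absent from the mass evaluation data.
    double GetMass(uint32_t z, uint32_t a) const
    {
        auto iter = m_table.find({z, a});
        if (iter == m_table.end())
            throw std::runtime_error("No mass entry for Z=" + std::to_string(z) +
                                     " A=" + std::to_string(a));
        return iter->second;
    }

private:
    std::map<std::pair<uint32_t, uint32_t>, double> m_table;
};
```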
If the command-line executable (`GWMEVB_CL`) is being used, the format is the following:

* `./bin/GWMEVB_CL <evb_operation> <your_input_file>`

where `<evb_operation>` should be a keyword which indicates to the program which type of event-building operation (slow sort, fast sort, slow sort with analysis, etc.) should be used. The file `main.cpp`, located in the `src` directory, contains the list of keywords.
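
As a sketch of how such a keyword might be mapped to an operation, consider the snippet below. The keyword strings and enum names here are purely illustrative; the real list of accepted keywords is defined in `main.cpp`.

```cpp
#include <map>
#include <string>

// Purely illustrative operation types and keyword strings; the keywords the
// program actually accepts are listed in the repository's main.cpp.
enum class EVBOperation
{
    SlowSort,
    FastSort,
    SlowSortAnalyze,
    Invalid
};

EVBOperation ParseOperation(const std::string& keyword)
{
    static const std::map<std::string, EVBOperation> s_operations = {
        { "slow",         EVBOperation::SlowSort },       // hypothetical keyword
        { "fast",         EVBOperation::FastSort },       // hypothetical keyword
        { "slow-analyze", EVBOperation::SlowSortAnalyze } // hypothetical keyword
    };

    auto iter = s_operations.find(keyword);
    return (iter != s_operations.end()) ? iter->second : EVBOperation::Invalid;
}
```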
If the GUI executable (`GWMEVB`) is being used, the format is simply:

* `./bin/GWMEVB`

The input file can then be loaded using the `File->Load` menu, or the user can manually enter the input parameters into the GUI. The GUI also provides functionality for saving an input file of the currently set parameters using the `File->Save` menu. The event-building operation is then selected using the drop-down menu.

Note that both executables should be run from the top-level repository directory, _not_ from the `bin` directory. This convention is used to define the paths to specific external data files, namely the workspace and the nuclear mass data file. If the program is run from the `bin` directory, the behavior of much of the program is undefined.