Once the data is properly ordered in time, it must be sorted into built events. The most general stage of this is referred to by this code as slow sorting. Slow sorting takes all data that falls within a coincidence window and places it into a single built event. This coincidence window is often referred to as the slow window; "slow" comes from the fact that this is the largest window used by the program, so this sorting takes place over the largest time scales and is therefore slow. There are a few important things to note about slow sorting in this program. Foremost is that it does not have a master trigger: data from any detector channel can start an event. This is essentially a requirement for using the time-shifts outlined above, as well as for optimizing the slow window size. The window stays open until a hit with a timestamp outside of the slow window is found. That hit then starts the new built event, and the previous event is flushed out to the next stage of the pipeline. Also, the slow sort algorithm does _not_ discard any data. The built event from slow sort is composed of dynamically allocated arrays (read: std::vector), meaning that in principle the slow sort incurs no intrinsic dead time other than from fragmentation of events.
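
To make this concrete, below is a minimal sketch of the slow sort window logic described above, assuming time-ordered input and a simplified hit structure; the type names (`Hit`, `BuiltEvent`) and the function itself are illustrative placeholders, not the classes actually used in this repository.

```cpp
#include <cstdint>
#include <vector>

// Illustrative hit and built-event structures; the real event builder
// defines its own versions of these.
struct Hit {
    std::uint64_t timestamp = 0;   // same units as the coincidence window
    int board = 0;
    int channel = 0;
    std::uint16_t energy = 0;
};

struct BuiltEvent {
    std::vector<Hit> hits;   // dynamically sized: no hits are discarded
};

// Slow sort: any hit may open the window (no master trigger). The window
// stays open until a hit falls outside of it; that hit then starts the next
// event and the previous event is flushed.
std::vector<BuiltEvent> SlowSort(const std::vector<Hit>& timeOrderedHits,
                                 std::uint64_t slowWindow)
{
    std::vector<BuiltEvent> builtEvents;
    BuiltEvent current;
    std::uint64_t startTime = 0;

    for (const Hit& hit : timeOrderedHits) {
        if (current.hits.empty()) {
            startTime = hit.timestamp;          // first hit opens the window
        } else if (hit.timestamp - startTime > slowWindow) {
            builtEvents.push_back(current);     // flush the finished event
            current.hits.clear();
            startTime = hit.timestamp;          // this hit starts the next event
        }
        current.hits.push_back(hit);
    }
    if (!current.hits.empty())
        builtEvents.push_back(current);         // flush the final event
    return builtEvents;
}
```
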
Fast sorting is an optional secondary stage of coincidence analysis aimed at resolving multi-hit events. In general, each coincidence event should contain at most a single hit from a given detector channel (there are cases where this is not true, but they are rare for the SESPS-SABRE setup). However, if the slow window is significantly wider than the typical time correlation for two hits, there is a possibility that two hits from a given channel may be put into a single built event. To resolve this hit degeneracy, the user may input additional time correlation information for specific channels. "Fast" refers to the fact that these windows must be shorter than the slow window. Specifically, for the SESPS-SABRE code, the fast sort provides the option to enforce a window on focal plane scintillator-anode data and then on focal plane scintillator-SABRE data, as the scintillator and SABRE tend to be much faster detectors than the focal plane ion chamber. These windows are referred to as the fast ion-chamber window and the fast SABRE window, respectively. It is important to note that fast sorting is _optional_, may need tweaking on an experiment-by-experiment basis, and is not recommended until the time-shifts and the slow sorting method have been tested and run successfully. Additionally, it should be emphasized that the default fast sorting stage _can_ dump data: it requires that an ion-chamber anode hit be present in order for an event to be saved. In general this means that scintillator-only events (scintillator singles) or SABRE singles will be dumped if the fast sorting is done.
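
As a rough illustration of the fast window idea (again, not the repository's actual implementation), the sketch below requires a scintillator reference hit, keeps ion-chamber and SABRE hits only if they fall within their respective fast windows around the scintillator time, and drops the event entirely if no anode hit survives; the `Detector` classification is a hypothetical stand-in for the real channel map.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical channel classification; the real code maps board/channel
// pairs to detector components through its channel map.
enum class Detector { Scint, Anode, OtherIonChamber, Sabre };

struct Hit {
    double timestamp = 0.0;
    Detector type = Detector::OtherIonChamber;
};

// Fast sort sketch for a single slow-sorted event: require a scintillator
// reference time, keep ion-chamber hits within the fast ion-chamber window
// and SABRE hits within the fast SABRE window, and reject the event if no
// anode hit survives (this is where data can be dumped).
bool FastSort(std::vector<Hit>& event, double fastIonChamberWindow,
              double fastSabreWindow)
{
    const Hit* scint = nullptr;
    for (const Hit& hit : event) {
        if (hit.type == Detector::Scint) {
            scint = &hit;
            break;
        }
    }
    if (!scint)
        return false;   // no scintillator reference time

    std::vector<Hit> kept;
    bool hasAnode = false;
    for (const Hit& hit : event) {
        const double dt = std::abs(hit.timestamp - scint->timestamp);
        bool keep = true;
        if (hit.type == Detector::Sabre)
            keep = dt < fastSabreWindow;
        else if (hit.type != Detector::Scint)
            keep = dt < fastIonChamberWindow;   // anode and other IC channels
        if (keep) {
            kept.push_back(hit);
            if (hit.type == Detector::Anode)
                hasAnode = true;
        }
    }
    if (!hasAnode)
        return false;   // the default fast sort requires an anode hit

    event = std::move(kept);
    return true;
}
```
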
## Basic Analysis
Technically speaking, after the sorting stages the event building is complete, and the next, much more complicated and experiment-specific data analysis stage should take over. However, due to some limitations of the CoMPASS software with online data analysis, as well as to provide a method to test the success of the event builder, the event builder can pass the data on to a very basic analysis class. This analysis is _not_ meant to be used as a final analysis program; it does not have many safety measures and is typically too simple and too difficult to modify for most experiments.

There are some key features that outline both the use and drawbacks of this analysis. First is that _any_ degeneracy in the data must be resolved for use with the analysis. Consider the following scenario: the front left delay line signal is slightly noisy, and inside the built event there are two front left delay line hits. To calculate a calibrated focal plane position, one must subtract the timestamps of the left and right signals for a given delay line. How then is the analysis to select which front left delay signal goes with the single front right delay signal? To keep the code generally applicable, it employs a very simple solution of taking whichever hit occurred first, but one can imagine all of the reasons why this is not desirable for specific experimental cases. This first-in selection scheme is employed for _every_ detection channel that gets converted into analyzed data. In general, the only data member that continues to maintain the earlier policy of not dumping data is the SABRE array; however, a downscaled version of the SABRE array is then required for use with the online plotting.

Additionally, due to the dynamic nature of the sorted data, checks must be made on the validity of the data at analysis time. This in turn induces a performance penalty as more and more complicated analyses are performed, which reduces the usefulness of adding more analysis steps. Finally, the data analysis tends to bloat the file size. Each additional analyzed parameter increases the written data size, and ROOT does not support optional writing. That is, even if an event does not have a right scintillator hit, all of the right scintillator data members will still be written to the file with an illegal default value (typically something like -1). In a more specialized analysis, only relevant data would be written, but due to the general nature of this analysis, along with its focus on providing sanity checks for the event building process, a lot of experimentally irrelevant data will be written.
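
The first-in selection and the delay line position calculation can be sketched roughly as follows; the struct layout and the `scale`/`offset` calibration parameters are assumptions for illustration, not the repository's actual analysis code.

```cpp
#include <vector>

// Illustrative delay line hit; the real analysis works on the full built event.
struct DelayHit {
    double timestamp = 0.0;   // channel timestamp, e.g. in ps
};

// First-in selection: of all hits on one channel, take the earliest.
// Returns -1 when the channel had no hits, mirroring the illegal default
// value convention described above.
double FirstTimestamp(const std::vector<DelayHit>& hits)
{
    double best = -1.0;
    for (const DelayHit& hit : hits) {
        if (best < 0.0 || hit.timestamp < best)
            best = hit.timestamp;
    }
    return best;
}

// Focal plane position from the left-right time difference of one delay
// line; scale and offset stand in for a real position calibration.
double FocalPlanePosition(const std::vector<DelayHit>& frontLeft,
                          const std::vector<DelayHit>& frontRight,
                          double scale, double offset)
{
    const double tLeft = FirstTimestamp(frontLeft);
    const double tRight = FirstTimestamp(frontRight);
    if (tLeft < 0.0 || tRight < 0.0)
        return -1.0;   // incomplete event: keep the illegal default value
    return scale * (tLeft - tRight) + offset;
}
```
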
# Installation, Building, and Setting up the Workspace
First, the only external dependency for this repository is the ROOT Data Analysis Package. Due to the large size and complexity of ROOT, it is not included as a submodule; rather, the user is relied upon to properly install and set up their own ROOT package. This code has been primarily tested and validated using ROOT6, so mileage may vary with older versions.

Installing and building the code is fairly straightforward. After obtaining the repository from GitHub, the code can be compiled and linked using GNU make and the included makefile. This will build two executables in the `bin` directory of the repository, currently called `GWMEVB` and `GWMEVB_CL`. The only difference between the two is that the `_CL` version is a pure command-line application, while the other has a GUI built in the ROOT environment. Currently the build only supports Mac and Linux operating systems; however, there are plans to move toward a more complete build system, `premake`. It should also be noted that, for the linking of ROOT libraries, the makefile relies upon the `root-config` tool that comes with the standard ROOT install.

Also included in the `bin` directory is a bash script called `archivist`. This script is for use at the FSU online DAQ and is mostly irrelevant in other use cases.

Finally, to run the code the user must set up the proper directory environment, referred to as the workspace. An example of what the workspace environment should contain is shown in the `example` directory of the repository. `raw_binary` should contain raw binary archives (read: .tar.gz) of CoMPASS runs that follow the format `run_#.tar.gz`. The code unpacks the archive to the `temp_binary` directory, reads the data, and saves an event built file to the proper directory based on the type of analysis the user requests. The `temp_binary` directory is then cleaned so that it can be used for the next run. Note that in general it is best to have the workspace located somewhere other than the repository, usually with a head directory name that indicates which experiment the included data is associated with.
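
For orientation, a hypothetical workspace might be laid out as shown below; only `raw_binary` and `temp_binary` are named explicitly above, and the remaining directory names are placeholders for whatever the `example` directory actually prescribes.

```
my_experiment_workspace/
├── raw_binary/       # run_1.tar.gz, run_2.tar.gz, ... (CoMPASS run archives)
├── temp_binary/      # scratch space; archives are unpacked here and cleaned after each run
└── ...               # output directories for event built files, organized by
                      # the type of analysis (see the example directory)
```
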