## Slow Sorting and Fast Sorting
Once the data is properly ordered in time, it must be sorted into built events. The most general attempt at this is referred to as slow sorting by this code. Slow sorting is where all data that falls within a coincidence window is taken and placed into a single built event. This coincidence window is often referred to as the slow window. "Slow" comes from the fact that this is the largest window used by the program, so this sorting takes place over the largest time-scales and is therefore slow. There are a few important things to note about slow sorting in the program. Foremost is that it does not have a master trigger; that is, data from any detector channel can start an event. This is essentially a requirement for using the time-shifts outlined above, as well as for optimizing the slow window size. The window stays open until a hit with a timestamp outside of the slow window is found. That hit then starts the new built event, and the previous event is flushed out to the next stage of the pipeline. Also, the slow sort algorithm does _not_ discard any data. The built event from slow sort is composed of dynamically allocated arrays (read: std::vector), meaning that in principle the slow sort incurs no intrinsic dead time other than that from fragmentation of events.
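
As a rough illustration, the following C++ sketch shows how an event-building loop of this kind can be structured. The `Hit`, `BuiltEvent`, `SlowSort`, and `slowWindow` names are illustrative assumptions rather than the actual types or parameters of the event builder, and the sketch assumes the slow window is measured from the first hit of the event.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical hit and built-event structures; the real code's layout may differ.
struct Hit {
    int channel;        // detector channel identifier
    uint64_t timestamp; // already shift-corrected and time-ordered
    double energy;
};

struct BuiltEvent {
    std::vector<Hit> hits; // dynamically sized, so no hits are discarded
};

// Group time-ordered hits into built events using a single slow coincidence
// window. Any channel can open an event (no master trigger); the event is
// flushed when a hit falls outside the window relative to the event's first hit.
std::vector<BuiltEvent> SlowSort(const std::vector<Hit>& hits, uint64_t slowWindow)
{
    std::vector<BuiltEvent> events;
    if (hits.empty())
        return events;

    BuiltEvent current;
    uint64_t startTime = hits.front().timestamp;
    for (const Hit& hit : hits) {
        if (!current.hits.empty() && hit.timestamp - startTime > slowWindow) {
            events.push_back(current);  // flush the finished event
            current.hits.clear();
            startTime = hit.timestamp;  // this hit opens the next event
        }
        current.hits.push_back(hit);
    }
    events.push_back(current); // flush the final event
    return events;
}
```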
Fast sorting is an optional secondary stage of coincidence analysis, aimed at resolving multi-hit events. In general, each coincidence event should contain at most a single hit from a given detector channel (there are cases where this is not true, but they are rare for the SESPS-SABRE setup). However, if the slow window is significantly wider than the typical time correlation between two hits, two hits from a given channel may be placed into a single built event. To resolve this hit degeneracy, the user may input additional time-correlation information for specific channels. "Fast" refers to the fact that these windows must be shorter than the slow window. Specifically, for the SESPS-SABRE code, the fast sort provides the option to enforce a window on focal plane scintillator-anode data and then on focal plane scintillator-SABRE data, as the scintillator and SABRE tend to be much faster detectors than the focal plane ion chamber. These windows are referred to as the fast ion-chamber window and the fast SABRE window, respectively.
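
A similarly hedged sketch of the fast-sort selection is shown below, reusing the `Hit` and `BuiltEvent` structures from the previous example. The channel-classification helpers (`IsScintillator`, `IsAnode`, `IsSABRE`) and the window parameters are hypothetical stand-ins for the experiment's actual channel map and configuration, and the handling of uncorrelated or scintillator-less events here is only one possible convention.

```cpp
// Hypothetical channel classification; a real implementation would consult
// the experiment's channel map rather than hard-coded channel numbers.
bool IsScintillator(int channel) { return channel == 0; }
bool IsAnode(int channel)        { return channel == 1; }
bool IsSABRE(int channel)        { return channel >= 16; }

// True if two timestamps fall within the given fast window of each other.
bool InFastWindow(uint64_t scintTime, uint64_t otherTime, uint64_t fastWindow)
{
    uint64_t diff = scintTime > otherTime ? scintTime - otherTime
                                          : otherTime - scintTime;
    return diff <= fastWindow;
}

// Within one slow-sorted event, keep anode hits only if they fall inside the
// fast ion-chamber window of the scintillator hit, and SABRE hits only if
// they fall inside the fast SABRE window; other channels are not fast-gated.
BuiltEvent FastSort(const BuiltEvent& event, uint64_t fastICWindow,
                    uint64_t fastSABREWindow)
{
    const Hit* scint = nullptr;
    for (const Hit& hit : event.hits) {
        if (IsScintillator(hit.channel)) { scint = &hit; break; }
    }
    if (!scint)
        return event; // no scintillator hit: pass the event through unchanged

    BuiltEvent result;
    for (const Hit& hit : event.hits) {
        if (IsAnode(hit.channel)) {
            if (InFastWindow(scint->timestamp, hit.timestamp, fastICWindow))
                result.hits.push_back(hit);
        } else if (IsSABRE(hit.channel)) {
            if (InFastWindow(scint->timestamp, hit.timestamp, fastSABREWindow))
                result.hits.push_back(hit);
        } else {
            result.hits.push_back(hit); // scintillator and all other channels kept
        }
    }
    return result;
}
```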