Subject: new calibrations
From: Bob Nolty (nolty@hep213.cithep.caltech.edu)
Date: Fri Jun 16 2000 - 17:13:28 EDT


Hi all --

I have done extensive investigations of the reprocessed data and the
reprocessed calibrations for one week worth of data, calibration set
8534 (runs 8534-8578). I have collected a series of PAW histograms
together in

http://www.cithep.caltech.edu/macro/protected/notes/cal2000.tar

(1.5 MB) and

http://www.cithep.caltech.edu/macro/protected/notes/cal2000.tar.gz

(0.4 MB). This note assumes you have printed out the histograms and
are looking at them as you read.
------------------------------------------------------------------------

SCATTER-OLD-OLD.PS

The previously existing situation, using my old dataset and the old
standard calibrations. This scatter plot shows, for non-interERP
events only (no N or S face) and for all events that pass my cuts for
reconstructing beta, the quantity 'tError': the time a box would be
expected to record for a beta=1 particle minus the time it actually
recorded. The x-axis is box number; the y-axis is tError in
nanoseconds. Each event is
represented by 2 dots. For example, if an event went from box 17 to
box 1, and the TOF was 2 ns longer than expected for beta=1, then box
1 would have an entry at -2 ns, and box 17 would have an entry at +2
ns. No microcuts have been applied.
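The tError bookkeeping above can be sketched as follows. This is only
an illustration of the definition in the text; the function name, the
input box times, and the path length are hypothetical stand-ins for
the real DST quantities (the actual analysis runs on PAW ntuples):

```python
C = 0.299792458  # speed of light in m/ns

def terror_entries(box_early, box_late, t_early, t_late, path_length_m):
    """Return the two (box, tError) scatter-plot entries for one event.

    tError = expected time - recorded time, where the expected TOF
    assumes a beta = 1 particle crossing path_length_m between the
    boxes.  The two entries are equal and opposite by construction.
    """
    expected_tof = path_length_m / C          # ns, for beta = 1
    recorded_tof = t_late - t_early           # ns, from the two box times
    excess = recorded_tof - expected_tof      # > 0: TOF longer than beta=1
    # The arrival box recorded later than expected -> -excess;
    # the starting box is credited with +excess.
    return [(box_late, -excess), (box_early, +excess)]

# Example from the text: an event from box 17 to box 1 whose TOF is
# 2 ns longer than expected gives box 1 an entry at -2 ns and box 17
# an entry at +2 ns.  (Path length chosen so expected TOF = 33 ns.)
entries = terror_entries(17, 1, t_early=0.0, t_late=35.0,
                         path_length_m=9.893151114)
```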

This week was one of about 4 weeks in which SM2 had a fairly large
number of outliers in tError. Also, as was the case for the first few
years of attico running, boxes 478 and 479 (SM5) had a wide spread in
tError.

Based on this plot, my old microcuts were to eliminate any events
involving boxes 478 and 479, due to the wide spread in tError, and to
eliminate any event involving SM2 at all, due to the large number of
outliers.

------------------------------------------------------------------------

SCATTER-NEW-NEW.PS

The same thing, for the newly-available reprocessed DSTs and the
newly-available calibrations. Qualitatively, it is little-changed
except for SM5. Here, the new calibration code gave up on calibrating
boxes 478 and 479, so they have very bad offset constants. This is
also the cause of the errant dots at negative tError in the boxes that
are paired with 478 and 479 in events.

------------------------------------------------------------------------

IERP-OLD-OLD.PS

For old data and old calibrations, the distribution of tError for
interERP events, broken out by interERP pair (0 = N/1, 1 = 1/2, ...,
6 = 6/S). Based on this plot, my old microcuts excluded N/1 because
the RMS was large; excluded 1/2 because the mean was not near zero;
and in fact my conservative cut excluded 3/4 because the RMS was just
over my cut of 1.5 ns.
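The pair-by-pair microcut logic described above can be sketched as
below. The 1.5 ns RMS cut is the value quoted in the text; the mean
threshold is a hypothetical "not near zero" value chosen for
illustration, and the toy tError lists are invented:

```python
from statistics import mean, pstdev

def interp_pair_microcuts(pair_terrors, rms_cut_ns=1.5, mean_cut_ns=0.5):
    """Flag interERP pairs whose tError distribution fails the cuts.

    pair_terrors maps a pair label (e.g. 'N/1') to its list of tError
    values in ns.  A pair is excluded if its mean is not near zero
    (|mean| > mean_cut_ns, an assumed threshold) or its RMS exceeds
    rms_cut_ns (1.5 ns per the text).
    """
    excluded = []
    for pair, terrors in pair_terrors.items():
        if abs(mean(terrors)) > mean_cut_ns or pstdev(terrors) > rms_cut_ns:
            excluded.append(pair)
    return excluded

# Toy data mimicking the old situation: N/1 has a large RMS, 1/2 has a
# mean well away from zero, 3/4 passes both cuts.
cuts = interp_pair_microcuts({'N/1': [-3.0, 0.0, 3.0],
                              '1/2': [0.9, 1.0, 1.1],
                              '3/4': [-0.1, 0.0, 0.1]})
```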

------------------------------------------------------------------------

IERP-NEW-NEW.PS

Same thing for new data and new calibrations. The dramatic
improvement in N/1 and 1/2 I believe must be due to a better
determination of the interERP TDC slope.

------------------------------------------------------------------------

SCATTER-OLD-NEW.PS
IERP-OLD-NEW.PS
SCATTER-NEW-OLD.PS
IERP-NEW-OLD.PS

These plots use old data with new calibrations (first pair) and new
data with old calibrations (second pair). They show that most of the
difference is due to the new calibrations, not to the new data.

------------------------------------------------------------------------

MEANS-SIGMAS.PS

For every box, this plot histograms the mean and the sigma of the
tError distribution (for non-interERP events only). Qualitatively
there is little difference. Quantitatively, the new data with new
calibrations appears to be slightly preferred, giving means closer to
zero and with smaller variances.
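The per-box bookkeeping behind this plot can be sketched as below — a
minimal Python illustration, assuming the (box, tError) entry list of
the scatter plots above; the function name and toy inputs are mine:

```python
from collections import defaultdict
from statistics import mean, pstdev

def box_means_sigmas(entries):
    """Collect the per-box (mean, sigma) of tError from scatter entries.

    entries is a list of (box, tError) pairs, one pair per dot, as in
    the scatter plots.  Boxes with fewer than 2 entries are skipped,
    since sigma is not meaningful there.
    """
    per_box = defaultdict(list)
    for box, terr in entries:
        per_box[box].append(terr)
    return {box: (mean(v), pstdev(v))
            for box, v in per_box.items() if len(v) >= 2}

# Toy usage: box 1 has two symmetric entries (mean 0, sigma 2);
# box 17 has only one entry and is dropped.
stats = box_means_sigmas([(1, -2.0), (1, 2.0), (17, 1.0)])
```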

------------------------------------------------------------------------

DIFFERENCE.PS

For a given box, say 1B01, did its tError distribution get better or
worse? This graph shows, for all 456 boxes that were active, the
difference between the mean of the tError distribution under old data
and old calibrations, and the mean with new data and new
calibrations. Only slightly more than half of the boxes got slightly
better, and slightly less than half got slightly worse.

------------------------------------------------------------------------

BETA.PS

This figure shows that the 1/beta distributions are unacceptable in the
absence of microcuts, for either set of calibrations. The new
calibrations are worse because of the events involving box 478 or 479,
which were not calibrated in the new calibrations.

------------------------------------------------------------------------

UCUTS.PS

The top plot shows my old analysis, with old data, old calibrations,
and my previously-determined microcuts. Note that there is one signal
event and one negative beta "background" event that underflowed the
histogram.

The middle plot uses new data and new calibrations, and its only
microcuts exclude the boxes flagged in "8534.MASK" as having bad TDC
calibrations. This list of bad TDC boxes was produced at GS during
the recalibration procedure. It contains just two live normal boxes
(478 and 479), four dead boxes (44, 162, 174, 381), and five N/S
(interERP-only) boxes (49, 51, 52, 547, 550). However, even with
these cuts in place there is a lot of garbage in the beta
distribution; this is due to the erratic performance of SM2. The
bottom plot is identical to the middle, except all events in SM2 are
excluded.

Actually, I am not able right now to implement interERP microcuts as
suggested by the 8534.MASK file. My microcuts look at the interERP
tError distributions summed over all boxes, whereas 8534.MASK suggests
excluding certain individual boxes from interERP events rather than
entire pairs.

Here is a summary of the negative beta events:

Run   Event  Boxes      OldBeta  NewBeta  Comment
8551  10267  1B01 1N07   -7.1     -1.3    OLD: IERP ucut   NEW: signal?
8553   9456  6B14 6S05   -0.3     +0.8    OLD: background  NEW: IERP ucut
8556   1763  1B01 1N07   -7.2     -2.4    OLD: IERP ucut   NEW: background
8561   4523  1B03 1N07   -1.3     -0.8    OLD: IERP ucut   NEW: signal?
8567  11089  1B15 3T01  +13.8     -2.0    OLD: +beta bg    NEW: background
8568  10190  5C05 5T03   -1.1     -1.1    OLD: signal      NEW: signal

------------------------------------------------------------------------

Summary and Discussion

I inadvertently picked an odd week to do my comparisons, because of
the erratic behavior of SM2.

In general, the tError distributions are very slightly improved by the
new calibrations.

Two of the interERP tError distributions were dramatically improved,
presumably because the old calibrations got the interERP TDC slope
wrong. I wonder why that was?

The improvements are primarily attributable to the new calibrations,
not to the new data.

In the old scheme, the calibrations did their best with every box
(e.g. 478 and 479) while the new scheme may give up on a box and leave
a bogus value in the calibration database. This meant that under the
old scheme, the calibrators and the analyzers did not have to agree on
microcuts -- in marginal cases, analyzers could leave 478 and 479 in
their analysis without too much negative consequence. Under the new
scheme, if the calibrators gave up on a box, it is essential for
analyzers to exclude the box; otherwise the bogus calibration values
will produce background in the analysis.

For timing offset calibrations, I guess the ideal scheme would be for
the calibrators to determine their microcuts, to calibrate all good
boxes without contamination from bad boxes (as they are doing now),
but then to add a step in which, after offsets for good boxes are
determined and fixed, the bad boxes are added back in and the best
constants are attained for bad boxes as well. However, this may not
be worth it at this point.

The old N/1 interERP data looked bad enough to cause me to exclude it.
The new N/1 data looks good enough that my microcut criteria would not
exclude it. However, there are 3 negative beta N/1 events in the data
which are almost certainly mistimed. Thus, ironically, the better
calibrations may lead to a worse 1/beta distribution.

8534.MASK excludes just about all of SM6 from interERP events;
however, I see no need for this from IERP-NEW-NEW, which has no
microcuts.

I am suspicious of the scheme to make microcuts that exclude
individual boxes from interERP events, rather than entire SM-pairs.
If you think about how the interERP circuit works, it is very unlikely
that a box that is working well for intraERP would work poorly for
interERP, unless the whole SM were screwed up in the interERP circuit.
Do you have evidence that individual boxes need to be excluded? Do
you have sufficient statistics to make the determination week by week?


