Small Angle GDH Run Information
This page contains information, and links to information, about the data runs taken during saGDH.
Also available are all the scripts I used to extract and summarize EPICS data from start/end run summaries & the raw data files into a mySQL database (scroll down for more info).
Here are the scripts that I used to create the HALOG Archive Search Tool.
If you'd like me to add some information, send you information that is not listed here, and/or answer a question,
then email my jlab account (singhj).
♠ denotes something that I am doing or need to do.
my page ♦ Official saGDH page ♦ saGDH HALOG Archive Search Tool ♦ Full HALOG Archive Search Tool
SCICOMP : [holmstro] ♦ [jyuan] ♦ [luhj] ♦ [singhj] ♦ [vasulk]
The accelerator schedules for 2003 are here.
Below is a table that gives a rough sketch of saGDH which took data in the summer of 2003.
Δ indicates that the corresponding quantity was changed during that time frame.
start finish first last period nominal cell energy polarization
date date run run angle name (MeV) moller
03/15 04/21 999 1174 Δ Δ Δ Δ
04/21 04/28 1175 1379 1 6 proteus 1149 0.741
04/28 04/30 1380 1411 1 6 proteus Δ
04/30 05/07 1412 1788 1 6 proteus 1542 0.781
05/07 05/08 1789 1799 1 6 proteus Δ
05/08 05/16 1800 2059 1 6 proteus 1149 0.742
05/16 05/17 2060 2061 1 6 proteus Δ
05/17 05/19 2062 2195 1 6 proteus 2237 0.689
05/19 05/20 2196 2197 1 6 proteus Δ
05/20 05/22 2198 2257 1 6 proteus 3325 0.740
05/22 07/15 2258 2293 Δ 6 Δ Δ
07/15 07/23 2294 2709 2 6 penelope 2134 0.742
07/23 07/23 - - 2 6 Δ Δ
07/23 07/24 2710 2777 2 6 priapus 1096 -
07/24 07/25 2778 2781 2 6 priapus Δ
07/25 07/28 2782 2948 2 6 priapus 4209 0.652
07/28 07/28 - - 2 6 priapus Δ
07/28 07/30 2949 3059 2 6 priapus 2135 0.747
07/30 07/31 3060 3060 2 6 priapus Δ
07/31 08/05 3061 3260 2 6 priapus 2845 0.767
08/05 08/08 3261 3265 2 Δ priapus Δ
08/08 08/13 3266 3487 2 9 priapus 3775 0.765
08/13 08/14 - - 2 9 priapus Δ
08/14 08/16 3488 3641 2 9 priapus 1147 0.762
08/16 08/16 - - 2 9 priapus Δ
08/16 08/21 3642 3850 2 9 priapus 2234 0.761
08/21 08/21 - - 2 9 priapus Δ
08/21 08/25 3851 4066 2 9 priapus 4404 0.756
08/25 08/26 - - 2 9 priapus Δ
08/26 08/30 4067 4218 2 9 priapus 3319 0.778
Dmitri was kind enough to email me the shift schedule.
♠ All sorts of interesting shift statistics will be listed eventually!
An old and out of date "Cell Data Table" web page is located here. ♠ I'm working on an update to that.
EPICS information from the Start/End Run Summaries from the HALOG and MSS
Note that the most complete set of start/end summaries is located in the MSS as a massive tar file: /mss/halla/gdh/raw/runfiles_gdh.tar
...And for the left arm runs: /mss/halla/gdh/raw/runfiles_Left_gdh.tar
Links to the start/end of run summary HALOG entries by run number:
Right arm start/end run summary links
Left arm start/end run summary links
Note that a single run could have anywhere from 0 to 9 HALOG entries for the start/end of run summary.
A run not having a start (end) summary does *not* necessarily mean that it will not have an end (start) summary.
The following are tab separated columns of all the variables in the start/end of run summaries for the left and right arms from
the HALOG and from the tar file in the MSS:
Right Arm Start from HALOG : Start from MSS : Key
Right Arm End from HALOG : End from MSS : Key
Left Arm Start from HALOG : : Key
Left Arm End from HALOG : : Key
The "*.key" file that goes with each "*raw.html" file indicates the column headings.
A list version of the column labels for the right arm start/end run summaries is here.
Null values are denoted by "\N" without the quotes. Unknown values are denoted by "?" without the quotes.
To create a smaller file with just the info you need, try this sample awk script with sample output.
-1. Please read the comments in each of the following files to determine how to modify them for your purposes!
0. If you have the "runfiles_gdh.tar" file, then all you need to do is (1) untar it and (2) run this perl script for the start summaries
and this one for the end summaries.
1. Otherwise, to generate a tab separated column run summary variable file like the ones listed above from the HALOG entries, you'll first
need an awk script that will:
a. identify an html file as being a start/end of run summary
b. find the run number that the file refers to
c. append the list of run numbers with the associated filenames
2. You'll then need to run a batch file to apply this awk script to every HALOG file in the time period of interest.
3. Now that you have a list of run numbers with the corresponding filenames for the run summary, you'll need a perl script
to pull out the information from each file.
4. You'll need a "*.key" file so that you know what data each column contains. Run this perl script to do it.
5. Finally, you'll want to search and replace the links in the data file to point to
6. If it all worked, you should have the following files:
a. "right.end" = a list of right arm run numbers and the filenames for their corresponding end of run summary HALOG entries
b. "rightend.raw" = a tab separated column summary file with every variable in the right arm end of run summary
c. "temp.key" = a key to data displayed in each column within "rightend.raw"
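The linked awk/perl scripts do the real work; purely as an illustration, here is a minimal Python sketch of steps 1a-1c. The "Start/End of Run" phrasing and the regex are my assumptions, not the exact HALOG format:

```python
import re

# Hypothetical sketch: classify a HALOG entry as a start/end of run
# summary and pull out its run number (steps 1a-1c above).
SUMMARY_RE = re.compile(r"(Start|End)\s+of\s+Run\s+(\d+)", re.IGNORECASE)

def scan_entry(text):
    """Return ('start'|'end', run_number), or None for other entries."""
    m = SUMMARY_RE.search(text)
    if m is None:
        return None
    return m.group(1).lower(), int(m.group(2))

def build_run_list(entries):
    """entries: {filename: html_text} -> sorted (run, kind, filename) rows."""
    rows = []
    for fname, text in entries.items():
        hit = scan_entry(text)
        if hit:
            kind, run = hit
            rows.append((run, kind, fname))
    return sorted(rows)
```

The resulting run/filename list is what step 2's batch file would accumulate across every HALOG file in the time period.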
Note that the information in these summaries is *not* always accurate because sometimes the start/end EPICS logger
scripts do not finish until long after the run ended.
Therefore some of the EPICS information is recorded under different settings than from during the run.
In addition, sometimes the shift workers do not update the entry form that pops up at the start and end of each run.
♠ A list of runs for which this EPICS data is not reliable and statistics illustrating the scope of this problem are forthcoming.
Summary files that don't include the word "raw" in their name have been corrected for these inaccuracies by
comparing to paper summaries and HALOGed shift summaries *by hand*.
Links to most owl/day/swing shift summaries (plus a few other important notes) are listed chronologically here.
Links to HALOG archive by month...
2003 (0303) March dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 999-1132
2003 (0304) April ----- 1 ----- dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 1133-1422
2003 (0305) May 1 dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 1422-2257
2003 (0306) June -------------- dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 2258-2259
2003 (0307) July 2 dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 2260-3074
2003 (0308) August ---- 2 ----- dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting † runs 3074-4218
2003 (0309) September dir ‡ all ‡ all *except* automatic entries ‡ run start ‡ run end ‡ beam time accounting
EPICS information in the Data Stream
Important EPICS data is periodically injected into the data stream.
These values are timestamped and recorded into a raw data file in ASCII format (whereas the raw data is in binary format).
Since I didn't know what variables were being logged, I wrote this script to find all possible EPICS variable names.
A list of all EPICS variable names with (best guess) descriptions and approximate read time periods is here.
This script can be used to pull out all the EPICS variables from a single raw data file. It also pulls out the prescaler settings.
The format of the output file is:
timestamp (year-mo-da hr:mi:se) <tab> seconds from 00:00:00 Jan 1 2003 <tab> variable name <tab> value
The above script creates an epics data text file for each split raw data file for
each run with the name gdh_[run number]_[split file number]_epics_data.txt.
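A line of these per-split files can be parsed like so (a Python sketch; the variable name in the test usage is only illustrative):

```python
# Minimal parser for the tab-separated EPICS lines described above:
# timestamp <tab> seconds-since-2003 <tab> variable name <tab> value.
# The field order comes from the text; everything else is an assumption.
def parse_epics_line(line):
    stamp, seconds, name, value = line.rstrip("\n").split("\t")
    try:
        value = float(value)          # numeric readbacks
    except ValueError:
        pass                          # leave string values (e.g. "IN"/"OUT") alone
    return {"time": stamp, "sec": float(seconds), "var": name, "val": value}
```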
To combine all the epics data text files for a single run in chronological and alphabetical order, use this script.
This script will combine files for all the runs at once.
The resulting file name will be gdh_[run number]_epics_data.txt, and they will all be created in the
"combined" sub-directory (automatically created by the script).
Once the combined files have been created, use this script to create summary files.
Note that this summary gives inaccurate information for beam related quantities.
This is because, during beam trips, beam related quantities often give nonsensical values.
Therefore this script creates summaries with cuts on beam position and this one creates summaries with cuts on beam current.
When the beam trips, the various bpms read "0" and the beam current reads below 0.1 μA.
Data is cut 6.1 seconds before to 30.1 seconds after a beam trip.
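The trip cut logic can be sketched like this in Python. Only the 0.1 μA threshold and the 6.1 s before / 30.1 s after window come from the text; the function shape is mine:

```python
# Sketch of the beam trip cut described above: a trip is flagged when the
# beam current reads below 0.1 uA, and samples from 6.1 s before to 30.1 s
# after each trip are excluded. Input is (seconds, current_uA) pairs.
def beam_trip_mask(samples, trip_ua=0.1, before=6.1, after=30.1):
    trips = [t for t, i in samples if i < trip_ua]
    keep = []
    for t, _ in samples:
        cut = any(trip - before <= t <= trip + after for trip in trips)
        keep.append(not cut)
    return keep
```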
These scripts will only summarize one epics data text file at a time.
Therefore execute this shell script to summarize all the data files in the "combined" sub-directory.
These summary files will be named gdh_[run number]_epics_data_summary?????.txt and created in the "summaries" sub-directory (automatically
created by the shell script), where ????? will be nothing, _wbpm, or _wbcm depending on which summary script was used.
The first line of the summary file is the prescaler string.
The next group of lines are a statistical summary of all the numerical epics variables in the following format:
epics variable name <tab> number of appearances <tab> minimum value <tab> maximum value <tab> mean <tab> population standard deviation
The numerical variables are separated from the alphanumerical variables by the line "-----".
The alphanumerical values are summarized in the following way:
epics variable name <tab> number of appearances <tab> first value <tab> number of occurrences of the first value <tab> second value <tab> number of occurrences of the second value ...
For those summary files that were created using beam trip cuts, there is an additional section, separated by the line "-----",
that contains the following information:
trip line count=
trip line check count=
trip line found count=
approx number of trips=
approx total run time(sec)=
approx beam off time (sec)=
approx run time after beam cuts (sec) =
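The per-variable statistics can be reproduced with a short Python sketch (function names are mine; note the *population* standard deviation, i.e. dividing by N rather than N-1):

```python
import math
from collections import Counter

# Sketch of the per-variable statistics in the summary files: count, min,
# max, mean, and population standard deviation for numeric variables;
# value/occurrence pairs for alphanumerical ones.
def numeric_summary(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n   # population, not sample
    return n, min(values), max(values), mean, math.sqrt(var)

def alpha_summary(values):
    # each value followed by its number of occurrences, as in the file format
    return list(Counter(values).items())
```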
Finally this script creates one large tab separated data file with each row corresponding to each run and each column corresponding to each
numeric epics variable statistic (count,min,max,mean,std).
Every so often, apparently for no reason, an EPICS readback will read "0". These are included in the statistics, because the problem was discovered much later.
This script also creates a text file with all the column names in order.
The following perl script makes this "load data" sql script for insertion into a table in a mysql database. Note that these are easily
modified to accommodate the data sets with beam trips.
If you want the epics data files for each run and the summary files for each run (only non-junk runs), then you'll have to look on the gdh-2 work disk on the
farm (b/c my account is not big enough):
/work/halla/gdh-2/epics_data_text_files/log_for_each_raw_data_file/ epics data log text file for each raw data file
/work/halla/gdh-2/epics_data_text_files/log_for_each_run/ epics data log file combined and sorted for each run number
/work/halla/gdh-2/epics_data_text_files/summary_for_each_run/ epics data summary statistics file for each run number
EPICS information in the Parity Data Stream
During the experiment, the parity DAQ was running simultaneously with the left/right HRS DAQ. The raw data files for parity runs also include EPICS information inserted into
the data stream every 12 to 14 seconds. The same set of scripts listed above can be used to extract this data. Time synchronization with right HRS DAQ runs is done within mysql.
A list of variables in the raw parity data files is located here. I have checked the IHWP settings using the parity epics data
stream and it all looks correct. ♠ I still need to incorporate the slit position, hall A current, and hall C current epics info from the parity data.
Run Filenames on the MSS
The raw data files for saGDH are located at: /mss/halla/gdh/raw/
A listing of the files in that directory is available here.
The raw data filenames are of the format: gdh_[run number].dat.[split file number] (for example: gdh_1802.dat.0)
Right arm run numbers range from 999 to 4218. Run numbers over 20000 are for the left arm.
If a particular raw data file exceeds a certain size, then the run is split and a new data file is created with an incremented split file number.
Split file numbers start from 0 (most runs) and can be as high as 7 (run 1801):
number of splittings 1st 2nd
0 998 1829
1 169 42
2 52 0
3 5 0
4 1 0
5 1 0
6 0 0
7 1 0
The number of times a run was split listed by run number is available here (left) and here (right).
Some runs that "do not exist" (-1) according to the list above are actually pedestal runs, which are listed here.
Except for run 999 (gdh_ped_999.dat.0), the filename format for a pedestal run is: gdh_ped_[run number].dat
During the second period, one of the data disks died. This caused us to lose three runs (3683,3692, and 3701).
After the disk died, every ninth run data filename would be skipped.
This should mean that every 9th run number from 3709 to 4034 is missing.
However, split runs count as a "run data filename." Therefore the "skip 9th run number" rule does not hold when a run between skipped run numbers is split.
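The filename conventions above can be sketched in Python (the regexes are my own; run 999's pedestal file is handled by making the split suffix optional):

```python
import re

# Sketch of the filename conventions: gdh_[run].dat.[split] for normal runs
# and gdh_ped_[run].dat for pedestal runs (run 999 also carries a split suffix).
RAW_RE = re.compile(r"^gdh_(\d+)\.dat\.(\d+)$")
PED_RE = re.compile(r"^gdh_ped_(\d+)\.dat(?:\.(\d+))?$")

def parse_raw_name(name):
    m = RAW_RE.match(name)
    if m:
        return {"run": int(m.group(1)), "split": int(m.group(2)), "ped": False}
    m = PED_RE.match(name)
    if m:
        split = int(m.group(2)) if m.group(2) else 0
        return {"run": int(m.group(1)), "split": split, "ped": True}
    return None
```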
The saGDH MySQL Database
The database is hosted on "uvapos1.jlab.org" and accepts queries from "*.jlab.org" hosts.
It includes all of the raw information from the start/end of run summaries and all of the information listed below.
I have scripts that I use to update this database and generate summaries from this database.
Access to this database over the network from onsite is allowed, just follow these instructions and play around with this file.
Get the password from me or Vince.
It can be accessed interactively, through PERL, or directly through the analyzer/ROOT.
I wrote a perl script to query the database and generate the "db_run.dat" file needed for the analyzer.
It does not use the DBI module, because I couldn't get it to work in the CUE.
Therefore, you must have the ".my.cnf" file setup correctly in order to use this script.
I wrote this very simple C library that can be added to an analyzer/ROOT macro and query for specific run info, such as beam and target polarizations.
Details on how to compile, load, and use it are described in the comments within the file.
Here is a tarball of SQL scripts that I wrote to setup the database and do various things, such as finding the refcell runs for each kinematic.
I did not include the data files because they are too big; I make these files available for reference purposes.
A list of columns stored in the "summary" table on a run by run basis is listed here.
Descriptions of the parameters in the master summary tables are listed below.
Run Flag, Type, and Comments
Each run is flagged with one of the following numbers (other columns give approximate fraction of runs):
flag 1st 2nd tot description
-1 0.03 0.04 0.03 data file does not exist: skipped run number or data file lost in data disk crash
0 0.22 0.12 0.16 junk run, not useful for anything: high deadtime, DAQ crash, ...
+1 0.11 0.04 0.07 secondary run, for example: run was taken for logging EPICS data over some time period, cosmics, BPM/BCM calibration, ...
+2 0.20 0.06 0.12 primary run with some problems, run taken for physics analysis but some problem: bad beam position, unknown target polarization, ...
+3 0.44 0.74 0.62 primary run with no obvious problems logged in HALOG/shift summary/paper summary
The impetus for these flags was Tim, who also did a first iteration of the flagging. "Some problems" include:
(1) unusually set prescale factors
(2) notes in HALOG/Shift Summaries/Paper Run Summaries
(3) spectrometer momentum drifting up or down during the run by more than 500 ppm
(4) beam position RMS in X or Y direction in bpm A or B exceeds 250 microns
(5) refcell pressures not recorded
(6) refcell being filled/evacuated during the last few minutes of the run
(7) septum setpoint changed during the last few minutes of the run
Each run is labeled by one of the following types (only for runs with flags +1 or higher):
runtype 1st 2nd tot description
acc 0.119 0.124 0.122 acceptance data, taken with and without the target collimators
bcm 0.000 0.002 0.002 taken for bcm calibration
bs 0.229 0.000 0.084 background studies (first period only)
bull 0.010 0.007 0.008 bull's eye scan for bpm calibration/check
cos 0.058 0.021 0.035 cosmics
det 0.000 0.005 0.003 detector calibration/check run
ela 0.051 0.063 0.059 elastic kinematic data
harp 0.001 0.002 0.002 harp run taken for bpm calibration/check
opt 0.170 0.070 0.106 optics data, taken with a sieve slit
oth 0.037 0.018 0.025 daq tests, detector tests, miscellaneous ...
pc 0.010 0.012 0.012 pressure curve data
ped 0.007 0.004 0.005 pedestal runs
pro 0.307 0.671 0.538 production runs
Some runs can be of more than one type even though they are only labeled by a single designation.
♠ Some carbon data were taken at the beam energies used to set the septum current. I labeled these as acceptance runs.
Background studies were taken when we were trying to diagnose the right septum problem during the first period.
The run comments are those from the start/end of run entry form that is supposed to be filled out accurately by shift workers.
♠ I made a nominal effort to correct these comments, but I can't guarantee that they are all correct.
Junk runs are labeled as such and again I made an effort to include why (usually wrong prescale).
I have added comments from the paper run summaries and the target logbook. These comments appear at the beginning
and are separated from the rest of the comments by semicolons.
The following comments are also present:
flag: -1 0 1 2 3 description
nobeg 0 6 9 0 14 run is missing *only* the start run summary in HALOG
noend 6 33 15 14 40 run is missing *only* the end run summary in HALOG
nowww 100 130 21 2 2 run is missing *both* start and end run summaries in HALOG
Note that a start/end run summary that is missing from the HALOG *often* exists in the MSS "runfiles_gdh.tar"
For saGDH, the prescaler file read:
; RIGHT SPECTROMETER on adaql2 computer on a-onl account.
; Prescale factors (integer), downloaded each run at prestart.
; Do NOT put spaces in string, NOR add 2nd string. (REALLY!!!)
; The default factor is essentially infinite.
; ps8 = 65535 turns off the 1 kHz pulser on T8.
; Keep enough T2 to measure efficiency
; PS1 for the Right spectrometer T1 defined by S1.and.S2
; PS2 for the Right spectrometer T2 defined by Cerenkov/S1/S2 majority
; PS8 for 1024 Hz pulser ( keep this at <= 100)
; Trig 9 = 30 Hz, and you cannot prescale it.
However, for the following runs, the prescaler string did not have the standard format:
2434: ps1=1 ,ps2=1 ,ps3=9999,ps4=9999,ps5=65535,ps6=65535,ps7=65535,ps8=65535
2435: ps1=5 ,ps2=1 ,ps3=9999,ps4=9999,ps5=65535,ps6=65535,ps7=65535,ps8=65535
After replaying these runs with the analyzer, I learned the following about how the analyzer interprets the "prescaler string" for saGDH:
(1) Fractional prescales are truncated to integers.
(2) Only the first definition is used.
(3) A ps that is not defined is set to 2^24.
(4) Spaces seem to be okay.
Here is a tab separated file with the prescale factors listed by run and in order for all non-pedestal and non-junk runs.
There are five EPICS variables for Beam Energy (all in units of MeV):
(1) "HALLA:p" (run summary and data stream)
(2) "Tiefenbach Hall A energy" (run summary)
(3) "halla_MeV" (data stream)
(4) "MBSY1c Hall A Beam energy" (run summary)
(5) "MBSY1C_energy" (data stream)
I believe that (2) and (3) are the same variable with two different names.
Also, I believe that (4) and (5) are the same variable with two different names.
first last ebeam HALLA:p Tiefenbach MBSY1c
run run (MeV) (MeV) (MeV) (MeV)
999 1174 Δ - - -
1175 1175 1149 NULL 1147.87 F1146.47
1176 1189 1149 NULL 1148.69 F1150.19
1190 1379 1149 NULL F1148.79 F1150.19
1380 1411 Δ - - -
1412 1736 1542E NULL F1148.79 F1543.20
1737 1788 1542 1541.95 F1148.79 F1543.20
1789 1799 Δ - - -
1800 1801 1149 1148.07 F1148.79 F1148.99
1802 1922 1149E 1148.96 F1148.79 F1149.79
1923 2059 1149 F1148.87 F1148.79 F1149.79
2060 2061 Δ - - -
2062 2188 2237E F1148.87 F1148.79 F2239.51
2189 2195 2237 F1148.87 2236.89 F2239.51
2196 2197 Δ - - -
2198 2219 3325 F1148.87 3324.36 F3330.15
2220 2257 3325 F3324.92 3324.99 F3330.15
2258 2293 Δ - - -
2294 2696 2134E 2134.20 0 F2136.56
2697 2709 2134 F2134.16 0 F2136.56
2710 2777 1096* F2134.16 0 F1096.81
2778 2781 Δ - - -
2782 2782 4209 F2134.16 0 F4220.94
2783 2798 4209 F2134.16 0 F4216.94
2799 2948 4209 4208.75 0 F4216.94
2949 3059 2135eP 2134.87 0 F2137.40
3060 3060 Δ - - -
3061 3260 2845E 2844.79 0 F2849.17
3261 3265 Δ - - -
3266 3487 3775 3775.42 0 F3781.77
3488 3494 1147 1147.26 0 F1147.83
3495 3515 1147 1147.28 0 F1148.23
3516 3641 1147E 1147.29 1147.29 F1148.23
3642 3850 2234E 2233.90 2233.91 F2235.91
3851 3870 4404 4405.38 4405.38 F4415.31
3871 3904 4404 4404.18 4404.18 F4413.19
3905 3974 4404 4404.51 4404.54 F4415.33
3975 4066 4404 4403.99 4403.99 F4412.92
4067 4218 3319 3318.80 3318.80 F3324.14
Δ = beam energy change
F = Value appears fixed or frozen
E = Helium-3 elastic data was taken for this setting
eP = eP measurement was taken for this setting
Italicized rows are those for which a run by run beam energy does not exist from the run summary data.
* = for runs 2710 to 2777, all the beam energy EPICS variables were fixed or frozen. The MBSY1c value appears to be a
setpoint that is systematically a little higher than the readback values for all other runs. Fitting the average readback
values vs the MBSY1c value for all beam energies to a second order polynomial gives:
<[HALLA:p]>/[MBSY1c] = c0 + c1*[MBSY1c] + c2*[MBSY1c]^2
c0 = (+0.99943E+00 ± 0.25284E-02)
c1 = (-0.14644E-06 ± 0.26444E-07) MeV^-1
c2 = (-0.48418E-10 ± 0.46468E-11) MeV^-2
Therefore, MBSY1c = 1096.87 MeV gives [HALLA:p]fit = (1095.94 ± 0.14) MeV.
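Evaluating that fit with the rounded coefficients quoted above (a Python sketch) gives roughly 1096.0 MeV at MBSY1c = 1096.87 MeV, consistent with the quoted value given the truncation of the published coefficients:

```python
# Evaluating the quoted second-order fit for <HALLA:p>/MBSY1c.
# Coefficients are the central values from the text (uncertainties dropped).
C = (+0.99943e+00, -0.14644e-06, -0.48418e-10)   # c0, c1 [1/MeV], c2 [1/MeV^2]

def halla_p_fit(mbsy1c_mev):
    """<HALLA:p> predicted from the MBSY1c value (MeV)."""
    ratio = sum(c * mbsy1c_mev**n for n, c in enumerate(C))
    return mbsy1c_mev * ratio
```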
Notes on the above values:
For runs 3516 to 4218, the "Tiefenbach" value is identical to the "HALLA:p" value within ± 0.01 MeV.
Values for "HALLA:p" and "Tiefenbach" that fluctuate from run to run are averaged from the run summary.
Calculated over one beam energy setting, the standard deviation of these quantities varies from 0.2 MeV to 0.8 MeV from setting to setting.
The "ebeam" value is a rounded average over some combination of "unfrozen" values from the "Tiefenbach" and "HALLA:p" EPICS variables.
Because a single epics variable was not stable for the whole data taking period, the following quantities are used for the beam energy
on a run by run basis in order of preference:
(1) average value of Tiefenbach energy logged over a run
(2) average value of "HALLA:p" logged over a run
(3) average value of the Tiefenbach energy logged over adjacent runs when Tiefenbach is "frozen" for that run
(4) average value of "HALLA:p" logged over adjacent runs when "HALLA:p" is "frozen" for that run
(5) the fit value of "HALLA:p" from the MBSY1C energy setting
♠ As Vince has pointed out, the elastic data has shown that the beam energy is sometimes shifted by a few MeV. This will have to be sorted out eventually.
Bodo did an eP beam energy measurement once (beam positions during measurement): 2135.67 MeV ± 0.20 MeV (stat) ± 0.46 MeV (sys). This value is "final."
Other beam energy measurements can be found here.
♠ I need to talk to someone on the accelerator side to get the beam energy for certain time periods.
Beam Polarization and Bleedthrough
All 2003 Moller polarization raw results are listed here.
The above list is the only place where first period Moller results are listed. Second period Moller final results are available here.
Links to all Compton runs (operational during the second period only):
runs during 01/30/03-05/12/03
runs during 05/12/03-07/28/03
runs during 07/28/03-08/20/03
runs during 08/20/03-06/27/04
♠ A run by run Compton list will be available when I get some feedback from experts on what a "good Compton production run" means.
The slit position information was obtained for the first period by Vince through MCC. For the second period, it was a combination of
ELOG entries labelled "Machine Update" or "Bleedthrough Measurements" and from Kathy's epics logger.
♠ At a minimum, I need to get more thorough info for:
Hall A current from 07/14/03 to 07/19/03
Hall C current from 04/20/03 to 07/19/03
Hall A slit position from 07/14/03 to 08/06/03
The formulas for calculating the bleedthrough for the second period from the hall A current,
hall C current, and hall A slit position were empirically determined by Tim Holmstrom.
During the downtime to move the septum from 6 degrees to 9 degrees, this (ELOG 1165175) happened, which we think explains the change in the bleedthrough formula.
The bottom line is that something in the beam configuration changed between the first period and the second period
and again between the 6 degree and 9 degree running of the second period.
The corrections for the first period are done by taking the average of the two second-period formulas,
and the uncertainty in the correction is taken as the difference between the two formulas.
The correction is given by:
P^A_corr = P^A_pure - (B/100)*(P^A_pure - P^C)
P^A_corr = true polarization of the actual beam in hall A
P^A_pure = polarization of the beam in hall A if the hall C laser is off
P^C = polarization of the beam in hall A if the hall A laser is off
B = *percentage* of hall A bcm reading that is hall C current in hall A
R = ratio of the hall C bcm reading to the hall A bcm reading (unitless)
S = slit position (in arb. units?)
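The correction formula as code (Python; function and argument names are mine):

```python
# The bleedthrough correction quoted above, with B given in percent.
def corrected_polarization(p_a_pure, p_c, b_percent):
    """P^A_corr = P^A_pure - (B/100) * (P^A_pure - P^C)."""
    return p_a_pure - (b_percent / 100.0) * (p_a_pure - p_c)
```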
The EPICS variable names for the Hall A slit position, attenuator setting, and laser setting are smrposa, psub_aa_pos, R00LPMESA.
Insertable Beam Half Waveplate
As a first pass, I have transcribed the BHWP settings from the paper runsheets into a text file and loaded it into the database.
♠ I have to cross check these with the halog, elog, and shift summaries. The BHWP is *not* logged into the datastream.
It is however logged into the parity datastream. The BHWP setting has been cross checked with the EPICS variables IGL1I00DI24_24M and IGL1I00OD16_16 from
the parity runs.
Tracking the relative sign for the beam polarization is done as follows:
(1) Take the sign from the Moller measurement and multiply by -1 if the BHWP = IN during the Moller measurement
(2) Multiply by +1 if BHWP = OUT and -1 if BHWP = IN
The absolute sign will be determined by looking at the asymmetry of the Delta.
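The two bookkeeping steps above can be sketched as (Python; names are mine):

```python
# Relative-sign tracking for the beam polarization: flip the Moller sign
# when the BHWP was IN during the Moller measurement (step 1), then flip
# again for runs taken with BHWP = IN (step 2).
def relative_beam_sign(moller_sign, bhwp_at_moller, bhwp_at_run):
    sign = moller_sign * (-1 if bhwp_at_moller == "IN" else +1)
    return sign * (-1 if bhwp_at_run == "IN" else +1)
```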
The beam positions are measured by two BPMs: IPM1H04A and IPM1H04B.
The coordinate system of the BPMs is such that:
positive x points to the left spectrometer
positive y points up
positive z points to the beam dump
BPMA is located at z_A = -7.524 m (upstream of the target), while BPMB is located at z_B = -1.286 m (upstream of the target). [J. Alcorn, et al NIM A 522 p294-346 (2004)]:
The position at the target can be obtained by projecting linearly through BPMA and BPMB:
u_target = m*(u_A) + (1-m)*(u_B)
m = z_B/(z_B-z_A) = -0.206156
and u = the X or Y position of the beam.
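The projection as code (Python; z values are from the NIM reference quoted above):

```python
# Linear projection of the beam position to the target through BPMA and BPMB.
Z_A, Z_B = -7.524, -1.286                 # m, upstream of the target
M = Z_B / (Z_B - Z_A)                     # = -0.206156...

def project_to_target(u_a, u_b):
    """u = the X or Y beam position; returns the position at the target."""
    return M * u_a + (1.0 - M) * u_b
```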
The average and RMS values of the beam positions at both BPMs in both X and Y directions are calculated with no cuts, cuts on "0" BPM readings, and cuts on beam current.
The average values over a run are projected to the target to get a "global" sense of the beam position during a run.
Right Spectrometer Momentum
The central momentum setting of the right spectrometer is determined from the field in the dipole as measured by the NMR probe.
The dipole NMR field value (B) is converted into the central momentum (p0) using the following formula [J. Alcorn, et al NIM A 522 p294-346 (2004)]:
p0 = Γ1*B + Γ2*B^2 + Γ3*B^3
Γ1 = +2698 MeV/T
Γ2 = 0.0 MeV/T^2
Γ3 = -1.6 MeV/T^3
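The conversion as code (a Python sketch; I assume the series starts at n = 1, since no Γ0 is quoted):

```python
# NMR field (Tesla) to central momentum (MeV) conversion quoted above.
GAMMA = (0.0, 2698.0, 0.0, -1.6)   # MeV/T^n for n = 0..3 (Gamma_0 assumed 0)

def central_momentum(b_tesla):
    """p0 in MeV from the dipole NMR field B in Tesla."""
    return sum(g * b_tesla**n for n, g in enumerate(GAMMA))
```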
The fields are averaged over all runs for a particular kinematic setting to give the nominal central momentum for those runs.
Run by run, the field values in the data stream are averaged over each run to give the central momentum for that run.
There was a time when the field readback or the field itself was unstable. These runs (4058-4066) may have some uncertainty
in the spectrometer momentum and are labeled as such.
Right Septum Readback Current
There were three currents on the Septum Controls GUI:
MSEPRI - septum readback current
MSEPRIR - septum power supply readback current
MSEPRSETI - septum set current
The start/end of run summaries log the "readback" current.
Vince has pointed out that the value logged in the data stream is actually the "power supply readback."
The "PS" value is not useful, because we used the "readback" value to set the septum to the appropriate momentum setting.
Therefore the start/end of run summary "readback" value is used *only* when start/end of run summary value
for the "set" value matches the set value recorded in the data stream.
Note that the septum set point was changed during the last 1-2 minutes of runs 3343, 3344, and 3671.
Each run (only for runs with flags +2 or higher) was taken with one of the following target configurations:
target1 target2 1st 2nd tot description
carbon all 84 149 233 6 deg: sls; 9 deg (before run 3905): sslss; 9 deg (after run 3904) lslsl
carbon some 205 64 269 6 deg: l ; 9 deg (before run 3905): l ; 9 deg (after run 3904) l l l
none NULL 148 75 223 nothing in the beamline except for 1 atm (nominal) of natural abundance Helium gas
pol3he 0 148 573 721 polarized helium-3 cell with target spin = 0 degrees (longitudinal)
pol3he 180 30 133 163 polarized helium-3 cell with target spin = 180 degrees (longitudinal)
pol3he 270 93 309 402 polarized helium-3 cell with target spin = 270 degrees (transverse)
refcell empty 57 122 179 reference cell at vacuum (less than 1 torr)
refcell n2 49 120 169 reference cell filled with operator chosen pressure of N2 gas
refcell 3he 4 9 13 reference cell filled with operator chosen pressure of 3He gas
These have been checked by hand with shift summaries/paper run summaries.
There is a ton of mostly target EPICS information that Kathy logged every 10 seconds.
♠ I need to check that each field configuration had the correct Target Half WavePlate position.
♠ A list of target tests and polarization measurements ordered by time is available here.
The target polarization is listed on a run by run basis using a linear interpolation between the NMR measurements immediately before and after the run.
♠ Note that on 08/22/03, some time between 1010 and 1801, the polarization mysteriously dropped from 42% to 22%. During this time period,
the target polarization is not well known.
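The run-by-run interpolation can be sketched as (Python; times in any consistent unit, e.g. the seconds-from-2003 column):

```python
# Linear interpolation of the target polarization between the NMR
# measurements bracketing a run. Function and argument names are mine.
def interp_polarization(t, t0, p0, t1, p1):
    """Polarization at time t, given NMR points (t0, p0) and (t1, p1)."""
    if t1 == t0:
        return p0
    frac = (t - t0) / (t1 - t0)
    return p0 + frac * (p1 - p0)
```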
The sign convention for the target polarization depends on the direction that the target *spin* is pointing.
Since the magnetic moment of the Helium-3 nucleus is negative, and the neutron
spin (for the most part) points in the same direction as the Helium-3 nuclear spin, the neutron spin in the Helium-3 nucleus
points in the opposite direction of the holding field.
A_parallel is defined with the target spin pointing in the direction of propagation of the beam.
A_perpendicular is defined with the target spin pointing in the direction of the scattered electron.
Putting all this together defines the following sign convention for the target polarization:
0 deg long holding field points towards the beam dump = -1 target spin
90 deg trans holding field points towards the right HRS = -1 target spin
180 deg long holding field points towards the compton = +1 target spin
270 deg trans holding field points towards the left HRS = +1 target spin
Reference Cell Pressures
The values from the EPICS data stream are unreliable b/c of the flaky ADC card.
The values fluctuate with an RMS of about 3 psi.
The "by hand" values are the reference cell pressures obtained from the HALOG/shift summary/paper run summary/target logbook.
I have calibrated the EPICS readback value for the refcell pressure against the readings
recorded by hand:
(1) First, the baseline offset is removed by subtracting the EPICS pressure reading of the nearest empty reference cell run.
(2) The resulting value is fit to a line against the "hand" recorded value:
p_hand = (p_EPICS)*m + b
m (1st period) = 0.9919 b (1st period) = -1.923 psi
m (2nd period) = 0.9857 b (2nd period) = +0.3833 psi
The pressures recorded by hand are in psig, which means they are pressures relative to atmospheric pressure (14.696 psi at STP).
Runs 2057 and 3632 do not have pressures listed, so the above formulas are used.
For all other runs, an average of the "hand" values and the formula-corrected "EPICS" values is used.
♠ Refcell runs taken during swing shift 08/15/03 (3576, 3583, and 3590) have some problems that I don't understand yet.
Obtaining the density of the gas requires knowledge of the temperature. The refcell RTDs were either not working or not installed during this experiment.
Therefore we'll assume a beam ON and oven ON temperature of about (30 ± 5) Celsius.
The density of the refcell (ρ) is calculated from the read pressure (P_refcell, in psia) using this formula:
ρ = P_refcell/(16.31 psia/amagat)
[density units are amagats]
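Putting the calibration and the density formula together (a Python sketch using the 2nd-period constants quoted above; the psig-to-psia conversion by adding 14.696 psi is my assumption based on the psig note):

```python
# Sketch of the refcell pressure correction and density formula above:
# subtract the empty-cell EPICS baseline, apply p_hand = m * p_EPICS + b,
# convert psig -> psia, then divide by 16.31 psia/amagat.
M_CAL, B_CAL = 0.9857, 0.3833        # 2nd period: slope, offset (psi)
ATM_PSI = 14.696                     # psig -> psia offset (assumption)

def refcell_density(p_epics_psig, p_empty_psig, m=M_CAL, b=B_CAL):
    """Density in amagats from the raw EPICS pressure readback (psig)."""
    p_corr = m * (p_epics_psig - p_empty_psig) + b    # calibrated psig
    return (p_corr + ATM_PSI) / 16.31                 # amagats
```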
The refcell was not leaky during saGDH; however, the pressure reading could differ by as much as 3% depending on whether the beam was ON or OFF.
Empty reference cells will be assumed to be actually empty (evacuated to vacuum pressure).
For all optics/acceptance runs, Vince and I, independently, checked the raster settings from the shift summaries/paper run summaries.
All pol3he and refcell runs are assumed to have the raster ON, because that is one of the preconditions for taking beam on these targets.
♠ There are some runs for which the raster status is unknown. I need to check the raster status by looking at the data itself.
An old and out of date "Cell Data Table" web page is located here. ♠ I'm working on an update to that, but note the following:
Densities are in amagats and were provided by A. Tobias and V. Nelyubin. Uncertainties are forthcoming.
Some wall thickness info for the second run is located in this HALOG. For more info, see Radiation Thickness note in the next section.
[He3] avg D1/D2
[He3] from fill
[He3] fin. avg.
[N2] from fill
Radiation Thickness and Collisional Loss Data
See this note.
For the composition, density, index of refraction, radiation length, Bethe-Bloch parameters,
and density effect parameters of GE180 glass and Corning 1720 (C1720) glass, go here.