ATCA
ATCA electronics for the HPS run, designed at SLAC
- packages needed:
yum install cppzmq-devel
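To verify the package actually landed (a quick check; rpm is standard on yum-based systems):
rpm -q cppzmq-devel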
- SLAC Confluence instructions for the EEL setup
- serial connection to atca1 crate:
login: root, password: HpsRoot, baud rate 115200
ifconfig eth0 down
ifconfig eth0 up 129.57.86.147
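To open the serial console, something along these lines should work (the device node /dev/ttyUSB0 is an assumption; check dmesg for the actual one):
screen /dev/ttyUSB0 115200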
- DTMs/DPMs nfs mount
The clonfarm1 machine is used as the file server for the DTMs/DPMs, because they do not support the newer NFS versions required by modern file servers like clonfs1. Make sure the NFS server is set up and running on clonfarm1. The file /etc/exports is supposed to contain the following lines:
/data/hps/slac_svt/daq 129.57.0.0/16(rw,sync,no_subtree_check,no_root_squash)
#/data/hps/slac_svt/cache 129.57.0.0/16(rw,sync,no_subtree_check,no_root_squash)
/data/hps/slac_svt/server 129.57.0.0/16(rw,sync,no_subtree_check,no_root_squash)
/data/hps/slac_svt/diskless 129.57.0.0/16(rw,insecure,async,no_subtree_check,no_root_squash)
Run the following to make sure both services are up, and restart them if needed:
systemctl status rpcbind
systemctl status nfs-server
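If either service is down, restarting along these lines should be enough (standard systemd/NFS commands; exportfs -ra re-reads /etc/exports after edits):
systemctl restart rpcbind
systemctl restart nfs-server
exportfs -ra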
- DTMs/DPMs boot
They are DHCP-boot devices; all settings are handled by the JLab computer center (Carl Bolitho). After boot, check each one by logging in, for example ssh root@dtm1 with password root (be patient, login may take a few seconds).
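A quick reachability sketch for several boards at once, assuming hostnames follow the dtm1/dpm1 pattern (only dtm1 appears in these notes; the other names are hypothetical):
for h in dtm1 dtm2 dpm1 dpm2; do
    ping -c 1 -W 2 $h > /dev/null && echo "$h up" || echo "$h DOWN"
done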
- some useful scripts in $CLAS/slac_svt/svtdaq/daqV2/rceScripts/:
reboot_cobs.sh        - crate power recycle
shutdown_cobs.sh      - turn power off (can be turned back on by reboot_cobs.sh)
rem_udp_server.sh     - ... (have to run it)
rem_control_server.sh - start GUI server
start_gui.sh          - start GUI
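These are typically run directly from that directory (a sketch; the notes do not say whether any of them take arguments):
cd $CLAS/slac_svt/svtdaq/daqV2/rceScripts/
./reboot_cobs.sh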
EEL building test setup
machine used: clondaq10 with TIPCIe card to connect the atca blades, and svt3 as master if clondaq10 is slave
run as 'hpsrun', password the same as clasrun
EXPID=hpsrun, SESSION=clastest10
coda config: clondaq10_er, svt3_clondaq10_er
trigger files: HPS/EEL2021/hps_v1_noThr_TOPRCE_TIMaster.trg, HPS/EEL2021/hps_v1_noThr_TOPRCE_VMEMaster.trg
data files monitoring: $CODA/src/bosio/main/evio_hpssvthist.c, kumac hps.kumac
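The EXPID/SESSION values above go into the environment before starting CODA; a minimal sketch, assuming a bash shell (under tcsh the equivalent is setenv):
export EXPID=hpsrun
export SESSION=clastest10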
Login and use of the test setup at SLAC
Tunneling to and starting VNC server at SLAC:
Run the first command on your desktop, the second from jlabl1, and the rest on rdsrv117.slac.stanford.edu (top apparently just keeps the session busy):
ssh -L 5905:localhost:1234 jlabl1
ssh -L 1234:localhost:5905 rdsrv117.slac.stanford.edu
vncserver :5 -geometry 1800x1000 -localhost -nolisten tcp
top
To kill the VNC server afterwards:
vncserver -kill :5
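With the tunnel chain up, a local VNC client can reach display :5 through local port 5905 (vncviewer stands in for whatever client you use):
vncviewer localhost:5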
To have write access on develpc:
ssh clasrun@develpc
sudo tcsh
Running DAQ:
The clasrun account can get root access with:
sudo bash
ssh clasrun@slac1
killall runcontrol rcServer rocs   (cleanup, just in case)
runcontrol -rocs -log

click on the "rocs" tab at the top right of the screen to observe the built-in xterms
click "Configure" and pick configuration "slac"; processes have to be started in the xterms
click "Download" and pick trigger file "slac1.trg"
click "Prestart"
click "Go"
- trigger file "slac1.trg" is located in the default directory
$CLON_PARMS/trigger/. It contains electronics settings that override the defaults from $CLON_PARMS/ti/, $CLON_PARMS/fadc250/, etc. In particular
TI_RANDOM_TRIGGER 1 3
enables the TI internal random pulser with prescale 2**3; with prescale 3 the event rate will be about 50 kHz.
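The prescale acts as a power of two on the pulser base rate. Working backwards from the numbers above, the implied base rate is about 400 kHz (an inference from this note, not a documented TI constant):
python -c "print(50e3 * 2**3)"    # implied base rate in Hz -> 400000.0
python -c "print(400e3 / 2**3)"   # event rate at prescale 3 in Hz -> 50000.0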
Rogue GUI:
ssh clasrun@develpc
bash
cd /u1/hps/server/heavy-photon-daq/software
conda activate rogue-hps-dev-rth
python scripts/SvtCodaRun.py
Reboot RCE:
source /u1/hps/setup_env.sh
cob_rce_reset dev-crate1-sm/2
RCE boot status:
cob_dump -rce dev-crate1-sm
Log files, when the SLAC team is running the DAQ, look like:
/home/clasrun/Xterm.log.slac1.2019.06.07.14.44.29.pE87M0
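Each log gets a timestamped, randomized name, so the most recent one can be picked out with a glob on the pattern above:
ls -t /home/clasrun/Xterm.log.slac1.* | head -1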