Clon01
SUN Blade 2000, 2 x 900 MHz UltraSPARC-IIIi, 2 GB RAM. Primary use: EPICS applications.
After Solaris installation and customization, start the cron jobs from the ''epics'' account. The cron job file is ''~epics/cron/crontab.txt'', so the jobs can be started with the command
crontab crontab.txt
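A minimal sketch of the full sequence; the ''crontab -l'' check is standard Solaris usage and is not part of the original instructions:
cd ~epics/cron        # directory holding crontab.txt
crontab crontab.txt   # install the cron jobs for the epics account
crontab -l            # list the installed crontab to verify it took effect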
The up-to-date cron job file is located in ''/var/spool/cron/crontabs/epics'', so if the ''/var'' directory was saved before the Solaris reinstall you can check it there.
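As an illustration only (the ''/var.saved'' path is hypothetical; use wherever the old ''/var'' was actually preserved), the saved crontab can be compared against the copy in ''~epics/cron'':
diff /var.saved/spool/cron/crontabs/epics ~epics/cron/crontab.txt   # report any differences between the saved and current cron job files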
Establish a passwordless connection with the machine ''devsrv.acc.jlab.org'': log in to ''clon01'' as ''epics'' and type
ssh devsrv.acc.jlab.org
Exit and ssh again; this time it should not ask for a password.
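The page does not describe how the keys are set up; a standard OpenSSH public-key sketch (assuming RSA keys and default ''~/.ssh'' locations on both machines) would be:
ssh-keygen -t rsa                     # as epics on clon01, generate a key pair (empty passphrase for passwordless use)
cat ~/.ssh/id_rsa.pub | ssh devsrv.acc.jlab.org 'cat >> ~/.ssh/authorized_keys'   # append the public key on devsrv; asks for a password this one time
ssh devsrv.acc.jlab.org               # should now log in without a password prompt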
Old info (not valid):
Data Mover (move2silo, EPICS)
SUN Blade 2000, 2 x 900 MHz UltraSPARC-IIIi, 2 GB RAM

---+ CLON01
This is a *Sun Blade 2000* dual-CPU (SPARC IIIi) server whose primary use in the *CLAS DAQ* is as the *Data Mover*. Its secondary use is as an *EPICS* visualisation workstation.
This server also runs:
   * VERITAS Cluster Server
   * NIS Slave Server
   * INGRES Server
   * CLON Printer Server (for CLONHP and CLONHP2 printers)
---++ Data Mover
Location: /usr/local/system/raid
Program: move2silo
Logs: logs/
To start it manually:
[root@clon01]$ cd /ssa
[root@clon01]$ mv presilo1 gopresilo
[root@clon01]$ move2silo >& /dev/null &
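The checks below are standard Solaris commands rather than part of the original procedure, and they assume the logs/ directory sits under the /usr/local/system/raid location given above:
ps -ef | grep move2silo                     # confirm the move2silo process is running
ls -lt /usr/local/system/raid/logs | head   # show the most recently written log files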
To move the Data Mover to *CLON10*, edit the file */usr/local/system/raid/checkraid* and uncomment the corresponding line:
...
# for CLON10
# /usr/local/system/raid/move2silo >& /usr/local/system/raid/log &
# for CLON01
rsh clon01 /usr/local/system/raid/move2silo ">&" /usr/local/system/raid/log "&"
...
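The excerpt above shows the CLON01 setup; assuming the switch simply swaps which line is commented out, the same block after moving the Data Mover to CLON10 would look roughly like this:
...
# for CLON10
/usr/local/system/raid/move2silo >& /usr/local/system/raid/log &
# for CLON01
# rsh clon01 /usr/local/system/raid/move2silo ">&" /usr/local/system/raid/log "&"
...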
---++ Memory Monitoring Tool *CEDIAG*
Location: /opt/SUNWcest/bin/cediag
[root@clon01 ~]$ cediag
cediag: Revision: 1.78 @ 2005/02/11 15:54:29 UTC
cediag: Analysed System: SunOS 5.8 with KUP 117350-06 (MPR active)
cediag: Pages Retired: 0 (0.00%)
cediag: findings: 0 datapath fault message(s) found
cediag: findings: 0 UE(s) found - there is no rule#3 match
cediag: findings: 0 DIMMs with a failure pattern matching rule#4
cediag: findings: 0 DIMMs with a failure pattern matching rule#5