Clon01

From CLONWiki

clon01 is the main boot server for VME, VTP and VxWorks controllers

Install Tftp and Tftpboot software.
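On a RHEL-style system this typically amounts to installing the tftp packages and enabling the socket-activated server (a sketch; the package and unit names are assumptions and may differ by OS release):

```shell
# Install the tftp client and server (package names assume RHEL/CentOS).
yum install -y tftp tftp-server

# On systemd-based releases the server is socket-activated:
systemctl enable --now tftp.socket

# Boot images then go under the tftp root (commonly /var/lib/tftpboot).
```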

Install VxWorks boot software.

Since we are customizing /etc/auto.direct, we have to disable puppet:

systemctl stop puppet
systemctl disable puppet

Create a directory for the automounts:

mkdir /mnt/admin

Add the following section to the end of /etc/auto.direct:

# clonfs mounts
/mnt/admin/clonfs1            -rw,bg  clonfs1:/
/mnt/admin/clonfs1a-old       -rw,bg  clonfs1a-old:/
/mnt/admin/clonfs1a-old-home  -rw,bg  clonfs1a-old:/vol/home
#
/mnt/admin/clonfs1-apps       -rw,bg  clonfs1:/vol/apps
/mnt/admin/clonfs1-archives   -rw,bg  clonfs1:/vol/archives
/mnt/admin/clonfs1-clas       -rw,bg  clonfs1:/vol/clas
/mnt/admin/clonfs1-clas12     -rw,bg  clonfs1:/vol/clas12
/mnt/admin/clonfs1-clonweb    -rw,bg  clonfs1:/vol/clonweb
/mnt/admin/clonfs1-clonwiki   -rw,bg  clonfs1:/vol/clonwiki
/mnt/admin/clonfs1-diskless   -rw,bg  clonfs1:/vol/diskless
/mnt/admin/clonfs1-downloads  -rw,bg  clonfs1:/vol/downloads
/mnt/admin/clonfs1-home       -rw,bg  clonfs1:/vol/home
/mnt/admin/clonfs1-local      -rw,bg  clonfs1:/vol/local
/mnt/admin/clonfs1-logs       -rw,bg  clonfs1:/vol/logs
/mnt/admin/clonfs1-mysql      -rw,bg  clonfs1:/vol/mysql
/mnt/admin/clonfs1-scratch    -rw,bg  clonfs1:/vol/scratch
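Each direct-map line consists of a mount point, mount options, and a server:/path pair. Before reloading autofs, the file can be sanity-checked with a small sketch (check_auto_direct is a hypothetical helper, not part of autofs; assumes a standard awk):

```shell
# check_auto_direct: verify that every non-comment line of an autofs direct
# map has exactly three fields and that the third looks like server:/path.
# Hypothetical helper; exits non-zero if any line is malformed.
check_auto_direct() {
  awk '!/^[[:space:]]*#/ && NF {
         if (NF != 3 || $3 !~ /^[^:]+:\//) {
           printf "bad line %d: %s\n", NR, $0
           bad = 1
         }
       }
       END { exit bad }' "$1"
}

# Example: check_auto_direct /etc/auto.direct && systemctl reload autofs
```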

Reload autofs:

systemctl reload autofs

Create a useful symlink:

cd /
ln -s /mnt/admin/clonfs1-diskless diskless

old

SUN Ultra 24 Workstation: JLAB inventory number: F424093, SysSN: 0906FMB01R.

old: SUN Blade 2000, 2x900MHz Ultra SPARC-IIIi, 2GB RAM. Primary use - EPICS applications.

After Solaris installation and customization, start the cron jobs from the epics account. The crontab file is ~epics/cron/crontab.txt, so the cron jobs can be started with the command

crontab crontab.txt

The up-to-date cron job file is located in /var/spool/cron/crontabs/epics, so if the /var directory was saved before reinstalling Solaris, you can consult it there.
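To confirm that the installed crontab matches the saved file, the two can be diffed (a sketch; assumes the epics account and a POSIX crontab):

```shell
# Compare the active crontab of the epics account with the saved copy;
# diff exits non-zero (and prints the differences) on any mismatch.
su - epics -c 'crontab -l' | diff - ~epics/cron/crontab.txt \
  && echo "crontab matches crontab.txt"
```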

Establish a passwordless connection with the machine devsrv.acc.jlab.org: log in to clon01 as epics and type

ssh  devsrv.acc.jlab.org

Exit and ssh again; this time it should not ask for a password.
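If the passwordless login does not come up on its own, the usual OpenSSH key-based setup can be done by hand (a sketch; the site may instead rely on host-based or Kerberos authentication, and ssh-copy-id may be absent on older systems, in which case the public key is appended to the remote ~/.ssh/authorized_keys manually):

```shell
# Generate a key pair (accept the defaults, empty passphrase) and install
# the public key into the remote account's authorized_keys.
ssh-keygen -t rsa
ssh-copy-id devsrv.acc.jlab.org
ssh devsrv.acc.jlab.org   # should now log in without a password
```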


old info - not valid

Data Mover (move2silo, EPICS)

CLON01: SUN Blade 2000, 2x900MHz Ultra SPARC-IIIi, 2GB RAM

This is a Sun Blade 2000 dual-CPU (SPARC IIIi) server whose primary use in the CLAS DAQ is as the Data Mover. Its secondary use is as an EPICS visualisation workstation.

This server also runs:

  * VERITAS Cluster Server
  * NIS Slave Server
  * INGRES Server
  * CLON Printer Server (for CLONHP and CLONHP2 printers)

Data Mover

Location: /usr/local/system/raid

Program: move2silo

Logs: logs/

To start it manually:

 
[root@clon01]$ cd /ssa
[root@clon01]$ mv presilo1 gopresilo
[root@clon01]$ move2silo >& /dev/null & 

To move the Data Mover to CLON10, edit the file /usr/local/system/raid/checkraid, uncommenting the CLON10 line and commenting out the CLON01 one:

...
# for CLON10
#  /usr/local/system/raid/move2silo >& /usr/local/system/raid/log &

# for CLON01
  rsh clon01 /usr/local/system/raid/move2silo ">&" /usr/local/system/raid/log "&"
...

Memory Monitoring Tool CEDIAG

Location: /opt/SUNWcest/bin/cediag

[root@clon01 ~]$ cediag
cediag: Revision: 1.78 @ 2005/02/11 15:54:29 UTC
cediag: Analysed System: SunOS 5.8 with KUP 117350-06 (MPR active)
cediag: Pages Retired: 0 (0.00%)
cediag: findings: 0 datapath fault message(s) found
cediag: findings: 0 UE(s) found - there is no rule#3 match
cediag: findings: 0 DIMMs with a failure pattern matching rule#4
cediag: findings: 0 DIMMs with a failure pattern matching rule#5