Generic server: Sun Enterprise 3500, 8 x 400 MHz UltraSPARC-II, 4 GB RAM
---+ *CLON10*
---++ General Information
   * [[1][Specifications]]
   * [[2][System Handbook]]
---++ Basic tasks
---+++ Quick shutdown (with power down):
# init 5
(but you should stop the Veritas Cluster first using the "hastop" command)
---+++ Graceful shutdown in 2 minutes (with power down):
# shutdown -y -i 5 -g 120
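Putting the two steps together, a full power-down might look like this (a sketch only; it assumes the Veritas binaries live in /opt/VRTS/bin as noted in the VERITAS Tips section below, and uses hastop's -local flag to stop the cluster on this node before the OS goes down):
# /opt/VRTS/bin/hastop -local
# shutdown -y -i 5 -g 120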
---+++ To reboot use:
# init 6
---+++ Console access is available from any host with an SSH client installed:
> ssh clon:clon10@hallb-ts1
> password: xxxxxxx
---++ Configuration
---+++ Network Interfaces
ge0: 129.57.68.21 (clon10-daq1)
ge1: 129.57.167.14 (clon10)
---+++ Local Filesystems
/dev/dsk/c0t1d0s0 2GB 78% /
/dev/dsk/c0t1d0s1 1GB 86% /var
/dev/dsk/c0t1d0s3 2GB 21% /opt
---+++ NFS Server
CLON10 exports /scratch to clon04, where it is used by event_monitor, online recsis, etc. It can be used by other nodes as well. If that partition is not exported (for example, after a clon10 reboot), type 'share /mnt/scratch'. The list of exported partitions can be viewed by typing 'share'. To mount /scratch on clon04, type 'mount clon10:/mnt/scratch /scratch'.
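For reference, the commands from the paragraph above as they are actually typed (as root; the share commands run on clon10, the mount runs on clon04):
on clon10, re-export the partition after a reboot:
# share /mnt/scratch
list the currently exported partitions:
# share
on clon04, mount it:
# mount clon10:/mnt/scratch /scratch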
We use CLON10 as an NFS server to export the RAID partitions to all other CLON machines. Since the RAIDs are mounted via VERITAS CFS, we have to be sure that VERITAS has started before exporting them.
After boot, run:
# sh /etc/dfs/dfstab.save
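To confirm that VERITAS has started and the RAID partitions are actually mounted, a quick check (a sketch using the cluster status command from the VERITAS Tips section plus the standard Solaris df):
# hastatus -sum
# df -k | grep raid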
---++ VERITAS Tips ([[VeritasConfig][Configuration]])
The Veritas Cluster consists of two servers, CLON10 and CLON01, plus shared storage: raid0-6 and scratch.
Most binaries are located in /opt/VRTS/bin/.
---+++ Shut down the Veritas Cluster on this host (as root only):
# hastop
---+++ Shut down the cluster on all hosts:
# hastop -all
---+++ Cluster status:
# hastatus -sum
---+++ Start the cluster on the local node:
# hastart
---+++ Mount a disk partition on the cluster
On clon01 or clon10, type:
# cfsmount /mnt/raid2 [nodename]
---+++ Unmount a disk partition
On clon01 or clon10, type:
# cfsumount /mnt/raid2 [nodename]
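For example, to mount raid2 on clon10 only and later unmount it there (if the nodename is omitted, cfsmount/cfsumount should act on all configured cluster nodes, though that default is worth double-checking):
# cfsmount /mnt/raid2 clon10
# cfsumount /mnt/raid2 clon10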
---+++ File System Check (FSCK)
# fsck -o full -F vxfs -y /dev/vx/rdsk/raid2/v1
---+++ Cluster communication:
   * Primary channel (Low Latency Transport, LLT, private network): ethernet ports hme0 (/dev/hme:0) on clon10 and eri0 (/dev/eri:0) on clon01
   * Secondary channel (low priority, lowpri, over the public network): ge1 (/dev/ge:1) on clon10 and ge0 (/dev/ge:0) on clon01
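To verify that both heartbeat links are alive, the standard VCS transport utilities can be used (assuming they are installed along with the rest of the Veritas tools; lltstat reports LLT node and link state, gabconfig shows GAB port membership):
# lltstat -n
# gabconfig -a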
---++ Data transfer to SILO
---+++ Master Host
CLON01
---+++ Program location
/usr/local/system/raid
---+++ Program name
move2silo
---+++ Log files
/usr/local/system/raid/logs
---+++ Manual start (as root from clon01):
# cd /ssa
# mv presilo1 gopresilo
# /usr/local/system/raid/move2silo >& /dev/null &
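To confirm that move2silo actually started, check the newest file in the log directory listed above (the exact log file naming scheme is not documented here, hence the placeholder):
# ls -lt /usr/local/system/raid/logs | head
# tail -f /usr/local/system/raid/logs/<newest log file>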
---+++ Manually move data from raid to SILO - example (as root from clon01):
The following command can be run several times for the same partition; it is harmless. After it finishes, make sure that all files are in /mss/clas/g8b/data.
# /usr/local/system/raid/clonput2 /mnt/raid1/stage_in/* /mss/clas/g8b/data
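A quick way to verify the transfer is to compare the staged files against the destination listing:
# ls /mnt/raid1/stage_in
# ls /mss/clas/g8b/data
Every file from stage_in should appear under /mss/clas/g8b/data.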