Clon10

From CLONWiki
Boiarino (talk | contribs)
Revision as of 10:24, 14 January 2007

---+ *CLON10*
General purpose server: SUN Enterprise 3500, 8x400MHz UltraSPARC-II, 4GB RAM

---++ General Information
  * [[http://www.sun.com/servers/midrange/e3500/specs.xml][Specifications]]
  * [[http://sunsolve.sun.com/handbook_pub/Systems/E3500/E3500.html][System Handbook]]

---++ Basic tasks
---+++ Quick shutdown (with power down)
<pre>
# init 5
</pre>
(Stop the Veritas Cluster first with the "hastop" command.)

---+++ Graceful shutdown in 2 minutes (with power down)
<pre>
# shutdown -y -i 5 -g 120
</pre>

---+++ To reboot
<pre>
# init 6
</pre>

---+++ Console access
Available from any host with an SSH client installed:
<pre>
> ssh clon:clon10@hallb-ts1
> password: xxxxxxx
</pre>

---++ Configuration
---+++ Network Interfaces
<pre>
ge0: 129.57.68.21  (clon10-daq1)
ge1: 129.57.167.14 (clon10)
</pre>

---+++ Local Filesystems
<pre>
/dev/dsk/c0t1d0s0    2GB    78%    /
/dev/dsk/c0t1d0s1    1GB    86%    /var
/dev/dsk/c0t1d0s3    2GB    21%    /opt
</pre>

---+++ NFS Server
CLON10 exports /scratch to clon04 for use by event_monitor, online recsis, etc.; other nodes can use it as well.
If the partition is not exported (for example, after a clon10 reboot), type 'share /mnt/scratch'. The list of exported
partitions can be viewed by typing 'share'. To mount /scratch on clon04, type 'mount clon10:/mnt/scratch /scratch'.
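The export check above can be wrapped in a small script. A minimal sketch, assuming the Solaris share(1M) command named on this page; the "command -v" guard and the messages are illustrative additions, so the script is a safe no-op on hosts without that command:

```shell
#!/bin/sh
# Re-export /mnt/scratch on clon10 if it has dropped out of the export list.
# "share" is the Solaris export command from this page; the guard below makes
# the script do nothing on hosts that lack it.
if command -v share >/dev/null 2>&1; then
    # Re-export only if /mnt/scratch is missing from the current export list
    share | grep -q '/mnt/scratch' || share /mnt/scratch
    result="checked exports"
else
    result="share(1M) not available on this host"
fi
echo "$result"
```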

We use CLON10 as an NFS server to export the RAID partitions to all other CLON machines.
Since the RAIDs are mounted via VERITAS CFS, make sure VERITAS has started before exporting them. After boot, run:
<pre>
# sh /etc/dfs/dfstab.save
</pre>
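Because the exports must not run before VERITAS has mounted the RAIDs, the post-boot step can be guarded. A minimal sketch; the grep pattern and messages are illustrative assumptions, and only /etc/dfs/dfstab.save comes from this page:

```shell
#!/bin/sh
# Re-run the saved share commands only once the CFS RAID partitions
# (/mnt/raid*) actually appear in the mount table.
if mount 2>/dev/null | grep -q '/mnt/raid'; then
    sh /etc/dfs/dfstab.save      # re-export the RAID partitions
    state="raids exported"
else
    state="raids not mounted yet - start VERITAS first"
fi
echo "$state"
```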

---++ VERITAS Tips (OBSOLETE!)
The Veritas Cluster consists of two servers, CLON10 and CLON01, plus shared storage: raid0-6 and scratch.
Most binaries are located in /opt/VRTS/bin/.
---+++ Shut down the Veritas Cluster (as root only)
On this host:
<pre>
# hastop
</pre>
On all hosts:
<pre>
# hastop -all
</pre>
---+++ Cluster status
<pre>
# hastatus -sum
</pre>
---+++ Start the cluster on the local node
<pre>
# hastart
</pre>
---+++ Mount a disk partition on the cluster (on clon01 or clon10)
<pre>
# cfsmount /mnt/raid2 [nodename]
</pre>
---+++ Unmount a disk partition (on clon01 or clon10)
<pre>
# cfsumount /mnt/raid2 [nodename]
</pre>
---+++ File system check (fsck)
<pre>
# fsck -o full -F vxfs -y /dev/vx/rdsk/raid2/v1
</pre>
---+++ Cluster communication
Primary channel (Low Latency Transport, LLT, over the private network): ethernet ports hme0 (/dev/hme:0) on clon10 and eri0 (/dev/eri:0) on clon01.
Secondary channel (low-priority, lowpri, over the public network): ge1 (/dev/ge:1) on clon10 and ge0 (/dev/ge:0) on clon01.
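On a VCS installation, this channel layout would typically be described in /etc/llttab on each node. A hypothetical sketch for clon10, using only the device names listed above; the cluster ID and node name directives are assumptions, not taken from this page:

```text
set-node clon10
set-cluster 1
link hme0 /dev/hme:0 - ether - -
link-lowpri ge1 /dev/ge:1 - ether - -
```

clon01's file would name eri0 (/dev/eri:0) for the private link and ge0 (/dev/ge:0) for the low-priority link.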
---++ Data transfer to SILO
---+++ Master host
CLON01
---+++ Program location
/usr/local/system/raid
---+++ Program name
move2silo
---+++ Log files
/usr/local/system/raid/logs
---+++ Manual start (as root from clon01)
<pre>
# cd /ssa
# mv presilo1 gopresilo
# /usr/local/system/raid/move2silo >& /dev/null &
</pre>
---+++ Manually move data from RAID to SILO (example, as root from clon01)
The following command can be run several times for the same partition; it is harmless.
After it finishes, make sure all the files are in /mss/clas/g8b/data.
<pre>
# /usr/local/system/raid/clonput2 /mnt/raid1/stage_in/* /mss/clas/g8b/data
</pre>
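The "make sure all the files are in /mss/clas/g8b/data" step can be scripted as a simple directory comparison. A hedged sketch, demonstrated here on a throwaway temp layout because the real paths exist only on clon01 (/mnt/raid1/stage_in and /mss/clas/g8b/data); the file names are made up for illustration:

```shell
#!/bin/sh
# List every file staged in stage_in/ that has not shown up in the silo
# directory. Demo layout: two staged files, only one of which "arrived".
tmp=$(mktemp -d)
mkdir -p "$tmp/stage_in" "$tmp/silo"
touch "$tmp/stage_in/run001.dat" "$tmp/stage_in/run002.dat"
touch "$tmp/silo/run001.dat"          # pretend run002.dat never reached the silo
missing=""
for f in "$tmp"/stage_in/*; do
    b=$(basename "$f")
    [ -e "$tmp/silo/$b" ] || missing="$missing $b"
done
echo "missing from silo:$missing"
rm -rf "$tmp"
```

On clon01 the same loop would compare /mnt/raid1/stage_in against /mss/clas/g8b/data before any staged files are cleaned up.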