Clon10

Latest revision as of 01:16, 4 January 2009

General purpose server: Sun Netra 240, property tag F423615, s/n 0742FMA032 (the boot message reports s/n 76547284, probably the motherboard's).

Serial connection: 'ssh root:clon10@hallb-ts1'.

Installation

Network devices:

bge0 - clon10
bge1 - clon10-daq1
bge2 - clon10-daq2
bge3 - clon10-sl1
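On Solaris of this era, each interface normally gets its boot-time address from an /etc/hostname.&lt;interface&gt; file. A sketch of what that would look like for the device list above (hypothetical: the actual files on clon10 are not shown on this page):

```shell
# Hypothetical /etc/hostname.* files matching the device list above
# (standard Solaris convention; each file contains just the interface's hostname):
#   /etc/hostname.bge0  ->  clon10
#   /etc/hostname.bge1  ->  clon10-daq1
#   /etc/hostname.bge2  ->  clon10-daq2
#   /etc/hostname.bge3  ->  clon10-sl1
```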

Install Solaris according to Solaris Installation Procedure and Solaris Customization on CLON Cluster.

Configure and start slave DNS server and slave NIS server.

Install and start Bootp service.

Start Tftp service.

Install EtherLite32 software.

Configure Sudo (22-Nov-2007: Sergey copied the 'sudoers' file from the old clon10; it still needs to be cleaned up!).

Export /data and /raidold partitions using NFS.

Start Nrpe service.

Install Procmail.


old info

General purpose server: SUN Enterprise 3500, 8x400MHz Ultra SPARC-II, 4GB RAM

General Information: [[http://www.sun.com/servers/midrange/e3500/specs.xml][Specifications]] [[http://sunsolve.sun.com/handbook_pub/Systems/E3500/E3500.html][System Hand Book]]

CLAS services to be started at boot time (not done yet!):

 rtserver
 msql daemon

Basic tasks:

Quick shutdown (with power down):

 # init 5
   (but it is better to stop the Veritas Cluster first using the "hastop" command)

Graceful shutdown in 2 minutes (with power down):

 # shutdown -y -i 5 -g 120

To reboot use:

 # init 6
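The shutdown notes above (stop the Veritas Cluster first, then power down) can be combined into a small wrapper. This is a sketch, not a script from this page; `hastop` and `shutdown` are the commands documented above, and the optional prefix argument is an assumption added here to allow a dry run:

```shell
# Hypothetical wrapper for the safe power-down sequence described above.
# Pass "echo" as the first argument to print the commands instead of running them.
safe_powerdown() {
    run=${1:-}
    $run hastop -all                # stop the Veritas Cluster first (see note above)
    $run shutdown -y -i 5 -g 120   # then graceful power-down in 2 minutes
}

# Dry run: show what would be executed
safe_powerdown echo
```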

NFS Server:

 CLON10 exports /scratch to clon04, where it is used by event_monitor, online recsis, etc. It can be used by other nodes as well.
 If the partition is not exported (for example, after a clon10 reboot), type 'share /mnt/scratch'. The list of exported
 partitions can be viewed by typing 'share'. To mount /scratch on clon04, type 'mount clon10:/mnt/scratch /scratch'.

We use CLON10 as an NFS server to export the RAID partitions to all other CLON machines. Since the RAIDs are mounted via VERITAS CFS, make sure VERITAS has started before exporting them. After boot, run:

# sh /etc/dfs/dfstab.save
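A dfstab file is, per the standard Solaris convention, just a list of share(1M) commands, which is why it can be replayed with `sh`. The contents of clon10's dfstab.save are not shown on this page, but it would look something like this (hypothetical entries based on the partitions mentioned above):

```shell
# Hypothetical /etc/dfs/dfstab.save contents (assumption; the actual file is not shown here).
# Standard Solaris dfstab format: one share(1M) command per line.
share -F nfs -o rw /mnt/scratch
share -F nfs -o rw /data
share -F nfs -o rw /raidold
```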


VERITAS Tips (OBSOLETE!):

The Veritas Cluster consists of two servers, CLON10 and CLON01, plus shared storage: raid0-6 and scratch.
Most binaries are located in /opt/VRTS/bin/.

Shutdown the Veritas Cluster on this host (as root only):

 # hastop

Shutdown the cluster on all hosts:

 # hastop -all

Cluster status:

 # hastatus -sum

Start the cluster on the local node:

 # hastart

Mount a disk partition on the cluster (on clon01 or clon10):

 # cfsmount /mnt/raid2 [nodename]

Unmount a disk partition (on clon01 or clon10):

 # cfsumount /mnt/raid2 [nodename]

File system check (fsck):

 # fsck -o full -F vxfs -y /dev/vx/rdsk/raid2/v1

Cluster communication:

 Primary channel (Low Latency Transport - LLT, private network): ethernet ports hme0 (/dev/hme:0) on clon10 and eri0 (/dev/eri:0) on clon01.
 Secondary channel (low priority - lowpri, over the public network): ge1 (/dev/ge:1) on clon10 and ge0 (/dev/ge:0) on clon01.
Data transfer to SILO:

 Master host:       CLON01
 Program location:  /usr/local/system/raid
 Program name:      move2silo
 Log files:         /usr/local/system/raid/logs

Manual start (as root from clon01):

 # cd /ssa
 # mv presilo1 gopresilo
 # /usr/local/system/raid/move2silo >& /dev/null &

Manually move data from raid to SILO - example (as root from clon01):

The following command can be called several times for the same partition; it is harmless.
After it finishes, make sure that all files are in /mss/clas/g8b/data.

 # /usr/local/system/raid/clonput2 /mnt/raid1/stage_in/* /mss/clas/g8b/data
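The "make sure that all files are on /mss/clas/g8b/data" step above can be done with a small helper. `clonput2` and the paths are from this page; the helper itself is an assumption, not a site script:

```shell
# Hypothetical helper (not from this page): report staged files that did not
# make it into the SILO directory after a clonput2 run.
check_moved() {
    src=$1; dst=$2
    for f in "$src"/*; do
        [ -e "$dst/$(basename "$f")" ] || echo "missing: $(basename "$f")"
    done
}

# Usage after the manual move above:
#   /usr/local/system/raid/clonput2 /mnt/raid1/stage_in/* /mss/clas/g8b/data
#   check_moved /mnt/raid1/stage_in /mss/clas/g8b/data
```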