ERROR Book
- Sergey B. 13-mar-2010: sometimes after a bad crash all rocs (even scalers) must be rebooted, otherwise EB crashes with messages:
..... case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap case 0: no swap .. done.
[0] FATAL: Event (Num 1 type 1) NUMBER mismatch -- roc[11] (rocid 30) sent -1 (type 18)
[0] ERROR: Discard data until next control event
clondaq1:coda_eb>
After that the problem persists: restarting every daq component (EB, ER, ETs, runcontrol, rcServer) did not help. Did daq_exit, daq_start, rebooted all rocs again - now it works.
- Sergey B. 12-mar-2010 around 10:50am: something is breaking coda_eb/coda_er:
..... UDP_standard_request >sta:EB1 booted<
UDP_standard_request >sta:EB1 booted<
UDP_standard_request >sta:EB1 booted<
UDP_standard_request >sta:EB1 booted<
UDP_cancel: cancel >sta:EB1 booted<
codaUpdateStatus: updating request done
CODA_Init 14 2
clasprod::EB1> bind on port 5001
DB update: >UPDATE process SET inuse='5001' WHERE name='EB1'<
CODAtcpServer: start work thread befor: socket=5 address>129.57.71.28< port=37258
wait: coda request >< in progress
wait: coda request >< in progress
wait: coda request >< in progress
wait: coda request >< in progress
wait: coda request >< in progress
wait: coda request >< in progress
Executing >< (len=0)
Segmentation fault (core dumped)
clondaq1:coda_eb>
At the same time et_start complains (the ET system is now protected against CC scans):
et ERROR: et_netserver: ET server being probed by non-ET client or read failure
et ERROR: et_netserver: ET server being probed by non-ET client or read failure
FIXED by checking for len=0 in the message string; all other possible checks were already implemented.
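A minimal sketch of that kind of guard (illustrative only, not the actual coda_eb source; the names handleRequest and codaExecute here are placeholders):

 /* reject empty request strings coming from the TCP server before they
    reach the command interpreter; an empty request is what a port
    scanner probing the server produces */
 #include <stdio.h>
 #include <string.h>
 
 /* stand-in for the real command interpreter */
 static int codaExecute(const char *request)
 {
   printf("codaExecute: >%s<\n", request);
   return 0;
 }
 
 static int handleRequest(const char *request)
 {
   size_t len = (request == NULL) ? 0 : strlen(request);
 
   if (len == 0)
   {
     printf("handleRequest: ignoring empty request (len=0)\n");
     return -1;
   }
 
   printf("Executing >%s< (len=%d)\n", request, (int)len);
   return codaExecute(request);
 }
 
 int main(void)
 {
   handleRequest("status");  /* normal request */
   handleRequest("");        /* scanner probe: now rejected instead of crashing */
   return 0;
 }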
- Sergey B. 12-mar-2010: a global reboot leaves almost all PMCs (except the first two) failing on MySQL access:
Done executing startup script $CODA/VXWORKS_ppc/bootscripts/boot_pmc1
-> INFO(mysql_real_connect9): errno=0
INFO(mysql_real_connect9): OK
mysql_real_connect9: error message: 2013/HY000 (Lost connection to MySQL server during query)
dbConnect ERROR: mysql == NULL
program
Exception current instruction address: 0x00000000
Machine Status Register: 0x0008b030
Condition Register: 0x40000085
Task: 0x1e404190 "coda_pmc"
It should be mentioned that we were doing something at the time; the day before there was no activity and everything rebooted fine.
- Sergey B. 1-sep-2009: during croctest1 reboot:
INFO(mysql_real_connect9): errno=0
INFO(mysql_real_connect9): OK
mysql_real_connect9: error message: 2013/HY000 (Lost connection to MySQL server during query)
dbConnect ERROR: mysql == NULL
The second reboot went fine.
- Sergey B. 15-june-2009: just after the run ended, noticed that all runs starting from 60073 have 'No configuration!' instead of 'PROD' etc. Correcting manually. Must check whether the other information is correct, and understand the reason if possible .. Is it related to the database update procedure implemented around that time ?
It appears that the begin_clasprod_xxxxxx.txt log files have that wrong value starting from run 60073.
- Sergey B. 11-June-2009: swapped ec4 and sc2; during the sc2 boot saw the following from the host:
..........................
taskDelay(sysClkRateGet()*5)
Args = -session clasprod -objects sc2 ROC -i
CODA_Init reached
CODA_Init 1
CODA_Init 2
sc2
CODA_Init 11
CODA_Init: objectTy >(null)<
CODA_Init 12
CODA_Init: use 'SESSION' as >clasprod<
Tcl_AppInit CALLS !!!!!
11-11
11-12
11-13
11111111111111111
22222222222222222
0xdd0e880 (tNetTask): arptnew failed on 8139a743
value = 0 = 0x0
taskSpawn "TCP_SERVER",250,0,100000,tcpServer
value = 231091088 = 0xdc62b90
proconhost
value = 0 = 0x0
......................
A second reboot shows the same:
...................
22222222222222222
0xdd0e880 (tNetTask): arptnew failed on 8139a743
>>>>>>>clon10 clasrun 2508 9998<<<<<<<<<
24
list >clon10 clasrun 2508 9998<
machine name = clon10
user ID = 2508
group ID = 9998
.................
- Sergey B. around 6-June-2009: ec2pmc1 error message:
write_thread: about to print
write_thread: about to print
write_thread: about to print
write_thread: about to print: 255115 20
write_thread: about to print: 255115 20
write_thread: about to print: 255115 20
write_thread: wait= 199 send= 8 microsec per event (nev=12755)
CALLING customized : machine check
SRAM ERROR :
SRAM Error Address : 0x00000000.f2000504
SRAM Error Data Low : 0x006805e0
SRAM Error Data High : 0x00420000
SRAM Error Parity : 0x00000004
SRAM Error Cause : 0x00000001
PCI_0 DEVICE ERROR :
PCI_0 Error: Status=0x00000100 Master abort PCI Cmd=7 (MemWr) ByteEnable=0 Par=0
Error Address High: 0x00000000
Error Address Low : 0xf0d10128
CALLING generic : machine check
Exception next instruction address: 0x0001f058
Machine Status Register: 0x0012b030
Condition Register: 0x40222042
Task: 0x1e3b0020 "coda_net"
Reboot did not help; had to turn the pmc off and run on the host only.
- Sergey B. 4-June-2009: when rebooting all rocs, some cannot connect to the DB; as a result we get the following:
0x1e42a360 (coda_pmc): bb_new: 'big' buffer created (addr=0x1eba3590, 16 bufs, 3145728 size)
coda_pmc: big buffer1 allocated
myname >ec1pmc1<
Done executing startup script $CODA/VXWORKS_ppc/bootscripts/boot_pmc1
-> INFO(mysql_real_connect9): errno=0
INFO(mysql_real_connect9): OK
program
Exception current instruction address: 0x00000000
Machine Status Register: 0x0008b030
Condition Register: 0x40000085
Task: 0x1e42a360 "coda_pmc"
-> tt 14cad8
vxTaskEntry +68 : coda_pmc ()
1e2275c0 coda_pmc +144: mysql_query ()
1e258520 mysql_query +54 : mysql_real_query ()
1e249850 mysql_real_query+110: mysql_send_query ()
1e249718 mysql_send_query+1d0: 0 ()
value = 0 = 0x0
->
For some reason it affects mostly PMCs. The first 10 or so PMCs booted fine, then a few showed that error, the few following booted fine again, and so on. It looks like the DB does not respond right away, and the PMC exits on timeout and does not try again. Should fix that place.
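One way to fix that place is a retry loop around the connect call. A minimal sketch, assuming the standard MySQL C API (the name dbConnectRetry, the retry count and the 2-second delay are assumptions, not the actual clon boot code):

 /* retry the DB connection a few times instead of giving up after the
    first timeout, so a slow DB response does not kill the PMC boot */
 #include <stdio.h>
 #include <unistd.h>
 #include <mysql.h>
 
 MYSQL *dbConnectRetry(const char *host, const char *user,
                       const char *db, int maxTries)
 {
   int i;
 
   for (i = 0; i < maxTries; i++)
   {
     MYSQL *mysql = mysql_init(NULL);
 
     if (mysql != NULL &&
         mysql_real_connect(mysql, host, user, NULL, db, 0, NULL, 0) != NULL)
     {
       return mysql;                     /* connected */
     }
     if (mysql != NULL)
     {
       fprintf(stderr, "dbConnectRetry: attempt %d failed: %s\n",
               i + 1, mysql_error(mysql));
       mysql_close(mysql);
     }
     sleep(2);                           /* give the DB server time to respond */
   }
   return NULL;                          /* caller still has to check for NULL */
 }

The caller would use this in place of the single connect attempt, and only give up (with the dbConnect ERROR above) after all tries fail.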
- Sergey B. 15-may-2009: last night the following error messages from sc2 were reportedly associated with 2 crashes (I was not on shift):
write_thread: about to print: 394603 20
write_thread: about to print: 394603 20
write_thread: wait= 176 send= 6 microsec per event (nev=19730)
0x1b1c7e00 (coda_proc): timer: 9 microsec (min=3 max=909 rms**2=11)
proc_thread: wait= 176 send= 12 microsec per event (nev=14534)
0x1b1c7e00 (coda_proc): timer: 9 microsec (min=3 max=909 rms**2=19)
0x1b1c7e00 (coda_proc): timer: 9 microsec (min=3 max=909 rms**2=17)
bosMgetid: ERROR: lenn=0 ???: 0x52432d32 0x38363333 0x31313534 0x00696d3d 0x25640a00
bosMgetid: ERROR: name >RC-2< does not described, update clonbanks.ddl file !!!
bosMgetid: lenn=4, len=4, nddl1=116
bosMgetid: name=>RC-286331154< try again ..
bosMgetid: ERROR: lenn=0 !!!: 0x52432d32 0x38363333 0x31313534 0x00696d3d 0x25640a00
bosMgetid: lenn=4, len=4
bosMgetid: ERROR: name >RC-2< does not described, update clonbanks.ddl file !!!
bosMgetid: lenn=4, len=4, nddl1=116
bosMgetid: name=>RC-286331154< no way !!
bosMlink: ERROR: bosMgetid returns -99
interrupt: timer: 48 microsec (min=9 max=195 rms**2=9)
interrupt: timer: 45 microsec (min=9 max=195 rms**2=18)
interrupt: timer: 45 microsec (min=9 max=195 rms**2=10)
interrupt: timer: 48 microsec (min=9 max=195 rms**2=14)
setHeartError: 0 >sys 0, mask 14<
WARN: HeartBeat[0]: heartbeat=6658(6658) heartmask=14
UDP_cancel: cancel >inf:sc2 sys 0, mask 14<
UDP_cancel: cancel >inf:sc2 sys 0, mask 14<
- Sergey B. 14-May-2009 5:54am: got a page from clascron@clon10: 'Process monitor: missing process: alarm_server - Online problem: alarm_server did not respond to status pool request'. Found later that stadis was not updating etc. It appears that the smartsockets server was hung somehow on clondb1; for example the command 'ipc_info -a clasprod' from clon10 or clon00 did not work, the messages were:
clon00:clasrun> ipc_info -a clasprod
08:08:31: TAL-SS-00088-I Connecting to project <clasprod> on <local> RTserver
08:08:31: TAL-SS-00089-I Using local protocol
08:08:31: TAL-SS-00090-I Could not connect to <local> RTserver
08:08:31: TAL-SS-00093-I Skipping starting <start_never:local> RTserver
08:08:31: TAL-SS-00088-I Connecting to project <clasprod> on <clondb1> RTserver
08:08:31: TAL-SS-00089-I Using local protocol
08:08:31: TAL-SS-00090-I Could not connect to <clondb1> RTserver
08:08:31: TAL-SS-00088-I Connecting to project <clasprod> on <clondb1> RTserver
08:08:31: TAL-SS-00089-I Using tcp protocol
^C
and it waits here. After restarting the server on clondb1 with the commands
/etc/init.d/smartsockets stop
/etc/init.d/smartsockets start
everything came back to normal. We should think about detecting that problem and restarting the server automatically, although it happens very rarely, a few times a year. Probably it can be done by monitoring cpu usage: when the server was hung it was using 100% of the cpu:
top - 08:06:16 up 130 days, 8:59, 1 user, load average: 2.13, 2.03, 1.64
Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
Cpu(s): 25.0% us, 0.1% sy, 0.0% ni, 73.0% id, 1.9% wa, 0.0% hi, 0.1% si
Mem: 3995356k total, 3968880k used, 26476k free, 88168k buffers
Swap: 8385920k total, 208k used, 8385712k free, 3432056k cached

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
 4471 root     25   0  5908 4540 1956 R  100  0.1  300:24.06 rtserver.x
    1 root     16   0  4756  556  460 S    0  0.0    0:11.25 init
    2 root     RT   0     0    0    0 S    0  0.0    0:03.85 migration/0
    3 root     34  19     0    0    0 S    0  0.0    0:10.23 ksoftirqd/0
    4 root     RT   0     0    0    0 S    0  0.0    0:01.79 migration/1
    5 root     34  19     0    0    0 S    0  0.0    1:03.72 ksoftirqd/1
    6 root     RT   0     0    0    0 S    0  0.0    0:01.62 migration/2
while normally it uses a fraction of a percent. It would also be useful to monitor cpu usage in general; a few days ago clasrun's sshd on clon10 was taking almost 50% of the cpu.
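A minimal sketch of such a check (not an existing clon tool; the 95% threshold, the use of ps and the restart commands are assumptions, and ps reports cpu usage averaged over the process lifetime, so this would have to run periodically, e.g. from cron, to catch a long hang like the one above):

 /* restart smartsockets if rtserver.x is pinned near 100% cpu */
 #include <stdio.h>
 #include <stdlib.h>
 
 int main(void)
 {
   FILE *fp = popen("ps -C rtserver.x -o %cpu= | head -1", "r");
   double cpu = 0.0;
 
   if (fp == NULL || fscanf(fp, "%lf", &cpu) != 1)
   {
     fprintf(stderr, "could not read rtserver.x cpu usage\n");
     if (fp != NULL) pclose(fp);
     return 1;
   }
   pclose(fp);
 
   if (cpu > 95.0)   /* normally it is a fraction of a percent */
   {
     printf("rtserver.x at %.1f%% cpu - restarting smartsockets\n", cpu);
     system("/etc/init.d/smartsockets stop");
     system("/etc/init.d/smartsockets start");
   }
   return 0;
 }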
- Sergey B. 12-may-2009: while looking for the reason for missing gate messages in all DC crates we found that the front end busy cable from SF to FC was disconnected on the floor. For the future we should remember that disconnecting that cable may create missing gate messages. It is also important before every run to use the 'dc_all' configuration and look at the scope in the counting house to make sure the front end busy signal is there.
- Sergey B. 16-feb-2009: runcontrol pop-up message while pressing 'Configure':
rcConfigure::loadRcDbaseCbk: Loading database failed !!!
On the second 'Configure' click it worked.
- Sergey B. June-2008: it looks like when one SILO drive goes bad, something happens in the script's logic and it never tries to run 2 streams, only one ! One .temp link can be observed, along with one presilo1 link which is NOT becoming another .temp. Needs checking !!!
- Sergey B. 7-may-2008: EB1 crashed, last messages:
EB: WARNING - resyncronization in crate controller number 5 ... fixed .
EB: WARNING - resyncronization in crate controller number 6 ... fixed .
EB: WARNING - resyncronization in crate controller number 6 ... fixed .
ERROR: lfmt=0 bankid=0
Found that tage2 and tage3 had to be rebooted.
- Sergey B. 5-may-2008: polar crate turned off; turned it back on, but noticed that one of the str7201 scalers had all lights on; the scaler was replaced within 2 days, everything works fine.
- Sergey B. 30-apr-2008: gam_server on clon04 takes 100% CPU.
Found on the web that creating the file '/etc/gamin/gaminrc' with the following contents should help:
#configuration for gamin
# Can be used to override the default behaviour.
# notify filepath(s) : indicate to use kernel notification
# poll filepath(s) : indicate to use polling instead
# fsset fsname method poll_limit : indicate what method of notification for the filesystem
#     kernel - use the kernel for notification
#     poll - use polling for notification
#     none - don't use any notification
#
# the poll_limit is the number of seconds
# that must pass before a resource is polled again.
# It is optional, and if it is not present the previous
# value will be used or the default.
fsset nfs poll 10  # use polling on nfs mounts and poll once every 10 seconds
# This will limit polling to every 10 seconds and seams to prevent it from running away
Created the file, will see if it helps ..
- Sergey B. 20-apr-2008: sc2pmc1 error message (during 'end run' ???)
................................................. write_thread: wait= 135 send= 28 microsec per event (nev=12351) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=4) proc_thread: wait= 130 send= 31 microsec per event (nev=14709) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=10) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=15) write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print: 252148 20 write_thread: about to print: 252148 20 write_thread: about to print: 252148 20 write_thread: wait= 135 send= 31 microsec per event (nev=12607) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=19) proc_thread: wait= 135 send= 31 microsec per event (nev=14739) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=19) 0x19c18050 (coda_proc): timer: 23 microsec (min=8 max=2161 rms**2=14) ERROR: bufout overflow - skip the rest ... bufout=485322144 hit=485387636 endofbufout=485387680 write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print write_thread: about to print: 255564 20 write_thread: about to print: 255564 20 write_thread: about to print: 255564 20 write_thread: wait= 128 send= 42 microsec per event (nev=12778) proc_thread: wait= 371 send= 31 microsec per event (nev=6291) ROC # 22 Event # 0 : Bad Block Read signature 0x00000140 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x0150107F -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x015810DC -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x02B80FD6 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x02A010BE -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!! ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!! ..................................................
At the same time at sc2:
...................................... interrupt: timer: 33 microsec (min=6 max=401 rms**2=14) interrupt: timer: 33 microsec (min=6 max=401 rms**2=4) interrupt: timer: 33 microsec (min=6 max=401 rms**2=17) interrupt: timer: 33 microsec (min=6 max=401 rms**2=1) interrupt: timer: 33 microsec (min=6 max=401 rms**2=17) interrupt: timer: 33 microsec (min=6 max=401 rms**2=0) interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums ................................ interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums logTask: 220 log messages lost. interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums logTask: 692 log messages lost. interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums logTask: 544 log messages lost. .................................. interrupt: WARN: [ 0] slotnums=1, tdcslot=16 -> use slotnums logTask: 562 log messages lost. interrupt: SYNC: ERROR: [ 0] slot=16 error_flag=1 - clear logTask: 707 log messages lost. interrupt: SYNC: ERROR: [ 3] slot=19 error_flag=1 - clear logTask: 517 log messages lost. interrupt: SYNC: ERROR: [ 4] slot=20 error_flag=1 - clear logTask: 620 log messages lost. interrupt: SYNC: ERROR: [ 5] slot=21 error_flag=1 - clear logTask: 559 log messages lost. interrupt: SYNC: scan_flag=0x00390000 logTask: 512 log messages lost. interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums logTask: 671 log messages lost. ...................................... logTask: 563 log messages lost. interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums logTask: 385 log messages lost. 
interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums wait: coda request in progress codaExecute reached, message >end<, len=3 codaExecute: 'end' transition wait: coda request in progress wait: coda request in progress wait: coda request in progress wait: coda request in progress wait: coda request in progress ....................................... 
wait: coda request in progress wait: coda request in progress interrupt: WARN: [ 0] slotnums=0, tdcslot=16 -> use slotnums interrupt: TRIGGER ERROR: no pool buffer available wait: coda request in progress wait: coda request in progress .....................................
Note: sc2 never warned about a wrong slot number before that moment.
FIX: it seems there is a bug in the 1190/1290-related rols and library: NBOARDS was set to 21 and several arrays were allocated with that length, but the actual maximum board number is 21, so the arrays need 22 entries; NBOARDS was set to 22 everywhere, will test next time.
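A minimal sketch of the off-by-one (illustrative only, not the actual readout list code; the array names just follow the log messages above):

 #include <stdio.h>
 
 #define NBOARDS 22          /* was 21: writing to index 21 went past the end */
 
 static int error_flag[NBOARDS];
 static int slotnums[NBOARDS];
 
 int main(void)
 {
   int board = 21;           /* highest board/slot index actually used */
 
   if (board >= NBOARDS)
   {
     printf("board %d does not fit in arrays of length %d\n", board, NBOARDS);
     return 1;
   }
   error_flag[board] = 0;    /* safe only because NBOARDS is now 22 */
   slotnums[board] = 16;
   return 0;
 }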
- Sergey B. 31-mar-2008: CC scans are hitting CODA !!! Messages from EB:
wait: coda request in progress wait: coda request in progress wait: coda request in progress wait: coda request in progress wait: coda request in progress wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=59063 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=60187 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=60295 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=60418 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=60823 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=32961 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=33122 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=33795 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=34251 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=34529 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=35811 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=36134 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=41569 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=53913 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=57701 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=33357 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=36136 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=39509 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=39900 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=45174 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=45909 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=34521 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work 
thread befor: socket=5 address>129.57.71.38< port=49023 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=49526 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=53555 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=55694 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=56833 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=58251 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=59023 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=33687 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=35731 wait: coda request in progress Error(old rc): nRead=0, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=51581 wait: coda request in progress wait: coda request in progress Error(old rc): nRead=18, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=41371 wait: coda request in progress Error(old rc): nRead=6, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=43198 wait: coda request in progress Error(old rc): nRead=10, must be 1032 CODAtcpServer: start work thread befor: socket=5 address>129.57.71.38< port=55260 wait: coda request in progress Error(old rc): nRead=0, must be 1032 clasprod::EB1> ^C
Similar messages were coming from ER.
ET reported the following:
TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure
and another ET:
....... et_start: pthread_create(0x0000000d,...) done TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure TCP server got a connection so spawn thread et ERROR: et_client_thread: read failure et INFO: et_sys_heartmonitor, kill bad process (2,3063) et INFO: et_sys_heartmonitor, cleanup process 2 et INFO: set_fix_nprocs, change # of ET processes from 2 to 2 et INFO: set_fix_natts, station GRAND_CENTRAL has 0 attachments et INFO: set_fix_natts, station LEVEL3 has 1 attachments et INFO: set_fix_natts, station ET2ER has 1 attachments et INFO: set_fix_natts, # total attachments 2 -> 2 et INFO: set_fix_natts, proc 0 has 1 attachments et INFO: set_fix_natts, proc 1 has 1 attachments
Also noticed that the CODAs (production and test setups) do not like each other; reported the following during 'configure':
Query test_ts2 table failed: Error reading table 'test_ts2' definition ec3 ec3
If one runcontrol is restarted it works, but then the other one complains !!!
- Sergey B. 31-mar-2008: 'run_log_comment.tcl' and 'rlComment' cannot recognize xml comments when extracting the level1 trigger file name from the <l1trig> tag; it seems they use the first line after the <l1trig> tag without checking for the comment sign.
- Sergey B. 10-mar-2008: it seems we found an error in run control: in Xui/src.s/rcMenuWindow.cc the parameters XmNpaneMinimum and XmNpaneMaximum were both set to 480; as a result the run control gui area above the log messages window was not big enough. Set them to 100 and 900 respectively, will ask Jie.
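For reference, a minimal sketch of that change, assuming the standard Motif paned-window constraint resources (the widget name here is a placeholder, not the actual rcMenuWindow.cc code):

 #include <Xm/Xm.h>
 #include <Xm/PanedW.h>
 
 void setPaneLimits(Widget pane_child)
 {
   /* was: both XmNpaneMinimum and XmNpaneMaximum set to 480,
      which froze the pane at that height */
   XtVaSetValues(pane_child,
                 XmNpaneMinimum, 100,
                 XmNpaneMaximum, 900,
                 NULL);
 }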
- 23-jan-2008: ER3 crashed several times during the last 3 weeks, mostly (only ?) during the 'End' transition; today's core file:
(dbx) where
  [1] 0xfe9e4c20(0x8068f9d), at 0xfe9e4c20
  [2] codaExecute(0xce4fdbc0, 0xce4fdbc0, 0x1, 0x8068c39), at 0x8068f9d
  [3] CODAtcpServerWorkTask(0x811fa00, 0x0, 0x0, 0xce4fdff8, 0xfea60020, 0xfe591400), at 0x8068d3a
  [4] 0xfea5fd36(0xfe591400, 0x0, 0x0, ), at 0xfea5fd36
  [5] 0xfea60020(), at 0xfea60020
(dbx)
- 14-nov-2007: first week of running G9A: crashes observed in ec1 (twice), tage3, scaler1, clastrig2, EB; no further details were obtained so far.
- Sergey B. 3-nov-2007: after about 26M events during the run, sc2pmc1 started to print the following:
ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000040 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000080 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x03C80ADD -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x8A90097F -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00000060 -> resyncronize !!!
ROC # 22 Event # 0 : Bad Block Read signature 0x00680BD9 -> resyncronize !!!
End run failed. Rebooted sc2. During the end transition ec2 froze with the message:
interrupt: timer: 32 microsec (min=19 max=86 rms**2=18)
0x1a05fdf0 (twork0005): sfiUserEnd: INFO: Last Event 26723663, status=0 (0x1ca648c8 0x1ca648c0)
0x1a05fdf0 (twork0005): data: 0x00000003 0x0007014f 0x00120000 0x00000000 0xc8009181 0xc0001181
0x1a05fdf0 (twork0005): jw1 : 0x00000000 0x0197c54f 0x00000003 0x0007014f 0x00120000 0x00000000
0x1a05fdf0 (twork0005): Last DMA status = 0x200000b count=11 blen=11
0x1a05fdf0 (twork0006): sfiUserEnd: ERROR: Last Transfer Event NUMBER 26723663, status = 0x1a000 (0x90001181 0x88001181 0x80009181 0x78001181)
0x1a05fdf0 (twork0006): SFI_SEQ_ERR: Sequencer not Enabled
Rebooted ec2. Started new run 55463, everything looks normal.
- Sergey B. 2-nov-2007: during the run ec2 started to print on the tsconnect screen:
Unknown error errno=65
Unknown error errno=65
Unknown error errno=65
Unknown error errno=65
Unknown error errno=65
data taking continues, but runcontrol printed the message:
WARN : ec2 has not reported status for 1516 seconds
ERROR : ec2 is in state disconnected should be active
ec2pmc1 looked fine; end run failed, needed to reboot ec2.