To: Charles Flagg, BNL

From: John Gunn, ESR

Subject: SBI HLY-02-03 ADCP and CTD data

Date: October 4, 2002



The following is a discussion of the problems I encountered and solutions I implemented on the SBI cruise (HLY-02-03). Some of these I've already discussed with you. Hopefully this will help us all in processing the data and preparing for future cruises.

First, in general terms: I had more problems with the BB150 than with the OS75, but both had difficulties. In no particular order:



The AutoADCP routine never ran properly on the BB150 PC but ran on every other machine I tested it on.

I have no idea why, except that the BB150 PC ran under Win98. It had limited resources, but I would think plenty to deal with this program. When starting the program, as soon as I loaded the CNT file the program ended with an error. The error window said "Run-time error '-2147024882 (80070000e)': Out of Memory". When I clicked the OK button, the program window closed, not allowing further diagnostics. The solution I chose was simply not to run AutoADCP on the BB150 PC.

There was also a program termination problem, although less severe, on the OS75 PC. The AutoADCP program on that PC would hang, usually after 24-36 hours of running. The error box was different; it said: "Run Time Error 5. Invalid procedure call or argument." I watched this for a while to see if I could discern a pattern but didn't see one. VMDAS seemed to run fine through all this. Eventually, I only ran AutoADCP on this PC when we were near the boundary of one of the regions, letting the software choose when to change the INI file.

Python script:

I had some difficulties with this initially. A parenthesis was left out of one line, giving a format error; once I fixed that and changed some settings on my Matlab path, it worked fine.


Data Processing

Some of the difficulties I had stemmed from differences between the CODAS3 version for DAS248 and the version for VMDAS; I just had to work through my unfamiliarity. There were also genuine problems with the GPS nav files, and since I was unfamiliar with the scripts and other software, it was time consuming to figure them out.

One thing I noticed was a daily spike in the time code for the nav data. This arose from differences between the $PADCP and $GPPAT records in the N2R file. The $PADCP record increments the day about 30 seconds before the $GPPAT record. Since the date is recovered from the $PADCP line and the time from the $GPPAT line, the date changes before midnight, and a few points at the end of each day are thus stamped with the next day's date. These spikes are obvious when plotting a first difference of the time code in the n2r_ash.mat file in the load subdirectory. I'm not sure whether these spikes are a major problem, but they add to the confusion.
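The repair amounts to finding first-difference spikes of about a day and pulling those points back. A rough sketch of the idea in Python (illustrative only; the actual scripts are Matlab, and the decimal-day representation and 0.5-day threshold here are my assumptions):

```python
def fix_midnight_spikes(times, jump=0.5):
    """Repair time codes whose date rolled over ~30 s early.

    times: list of decimal-day time codes (as in n2r_ash.mat).
    A point that jumps ahead of its predecessor by roughly a day
    is assumed to carry the next day's date; subtract one day.
    """
    t = list(times)
    for i in range(1, len(t)):
        if t[i] - t[i - 1] > jump:   # forward spike of ~1 day
            t[i] -= 1.0              # pull the date back one day
    return t
```

Plotting the first difference of the corrected series should then show no spikes at the day boundaries.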

The Ashtech heading data had problems during two periods: 21-25 July and 17-19 August. During the first period the average heading data was intermittent, and this caused problems with the processing script; it just didn't complete, ending when one of the intermediate files contained no data. I figured out that the script that looks for the heading data, gen_ang_file.m, was losing its way because of the gaps. It was only looking at windows of 1000 data points, which wasn't long enough in this case. I modified it to look at more and more data until it found the right time period. The 1000-point limit seemed to be there just to speed up processing, so increasing it when there were gaps didn't seem to matter. I have since examined the nav files (N2R), and there was a significant increase in the noise in the heading data from the Ashtech during this period. Although there appears to be good heading data there, the software must be rejecting it because of the increased noise. It might be possible to recover the clean data by pre-processing it.
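The gap fix can be pictured as widening the search window until enough valid headings turn up. A hypothetical Python sketch of the logic (the real change was inside gen_ang_file.m; the names and the doubling strategy are mine, not the script's):

```python
def find_heading_block(headings, target_len, start_window=1000):
    """Search an ever-larger window for enough valid headings.

    headings: list of heading values, with None marking gaps.
    target_len: number of valid points needed for the average.
    A fixed 1000-point window speeds up processing but fails when
    gaps push the good data further out; doubling the window until
    enough valid points are found handles gappy records.
    """
    window = start_window
    while window <= len(headings):
        chunk = [h for h in headings[:window] if h is not None]
        if len(chunk) >= target_len:
            return chunk[:target_len]
        window *= 2
    # fall back to whatever valid data exists
    return [h for h in headings if h is not None]
```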

The second gap in Ashtech heading data occurred because the Ashtech data stream stopped. At the time, VMDAS opened a message window on the BB150 PC when this happened, so we suspected a problem with that PC. We switched the BB150 PC to P-code input as a short-term solution. The OS75 display seemed OK but, in reality, wasn't getting the Ashtech data either. We didn't realize this because that PC always had a message window open due to buffer overflows, and the messages about the lack of nav data were buried in the other text and not obvious. We eventually realized that the problem was not just with the BB150 PC: the Ashtech data stream had been turned off. By the time we figured this out, the Ashtech data stream was back on, and we eventually switched back to it. The buffer overflow messages thus masked the actual problem with the Ashtech, resulting in a gap longer than it should have been. A solution would be to make some arrangement on the ship so the Ashtech would not be turned off, at least not without consulting the scientific party.


I never figured out how to properly increment the data set with a day or two's worth of files. Some of my attempts resulted in multiple passes of the data being stored in the database, and in other cases the processing hung or was confused. I ultimately just started over from the beginning every time I added data, which of course happened less frequently because of the work involved. A related annoyance was that every time the number of depth bins changed, the user had to input the start and stop time for that particular setup. Since this was common, especially near a region boundary, there were many of these segments; the OS75 had about 20 even after ignoring the ones of short duration. Since the start and stop times are listed in the scan file output, it would be better if the script just went to that file and figured out the start and stop times for consistent bin segments on its own.

Alternatively, the number of bins could be set to be the same for all regions, eliminating the need to monitor it so closely.
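The scan-file approach could be as simple as grouping consecutive ensembles by bin count and emitting one start/stop pair per run. A minimal sketch, assuming the scan output can be reduced to (timestamp, bin-count) pairs (the actual scan file format would need checking):

```python
def bin_segments(records):
    """Group ensembles into segments of constant bin count.

    records: list of (timestamp, nbins) tuples, as might be pulled
    from the scan file output.  Returns (start, stop, nbins) for
    each run, so start/stop times need not be typed in by hand.
    """
    segments = []
    start, current, last_time = None, None, None
    for time, nbins in records:
        if nbins != current:
            if current is not None:
                segments.append((start, last_time, current))
            start, current = time, nbins
        last_time = time
    if current is not None:
        segments.append((start, last_time, current))
    return segments
```

Short-duration runs could then be filtered out of the returned list before the start/stop times are fed to the processing.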


I made changes in various scripts to deal with problems I encountered or "improvements" desired. I've listed them below, and there are additional comments in the scripts themselves to highlight the changes made.

gen_ang_file.m

Edited to increase the number of points examined for inclusion in the average heading calculation.


An attempt to fix time code spikes in the heading data - not effective.

plt_time_vel_contours.m

Inserted section to get rid of spikes and NaN's in time code and lat and long.

plt_vel_sections.m

Created INI files for different sections for various settable parameters. File is edited to select the section to plot.

Inserted section to clean up time code.

Some plotting parameters set as variables and included in INI file.


For the OS75 I had some problems with array lengths not matching up, so I put in some checks to correct this. The actual reason for the mismatch should be investigated.

vector_grid.m

Uses quiver.m to plot an averaged velocity field.


Uses median values to selectively despike data.

day_plot_loop.m

Plots daily cruise track.
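For reference, median-based despiking of the kind vector_grid.m does might look like the following (an illustrative Python sketch; the window size, MAD threshold, and replace-with-median choice are my assumptions, not the script's actual parameters):

```python
def median_despike(values, window=5, threshold=3.0):
    """Flag points that deviate too far from a running median.

    A point further than `threshold` times the median absolute
    deviation (MAD) from its window median is replaced with that
    median; everything else passes through untouched.
    """
    half = window // 2
    out = list(values)
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        chunk = sorted(values[lo:hi])
        med = chunk[len(chunk) // 2]
        mad = sorted(abs(v - med) for v in chunk)[len(chunk) // 2]
        if mad > 0 and abs(values[i] - med) > threshold * mad:
            out[i] = med   # spike: substitute the local median
    return out
```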

CTD Software

I also made changes to the CTD and bottle data scripts you had written. I've included comments on those for ease of future use.

read_bttle_txt_files.m

Changed so the number of columns in the input file would be 17 (one greater). Text of bottle data text file was


Modified to accommodate extra column in input file.

ID in plot label moved to top and made a variable

Seventh plot added (TS plot)

Some plot limits changed and edits on labels

Date and time written in lower right corner of each plot

read_cnv_files.m

Some plotting parameters made into variables and stored at top of file for easy editing



Plotting symbols made to be consistent with bottle data plots

Some plot axis limits changed

Editable text inserted at top of file to choose stations wanted for plot

Date and time written in lower right corner of each plot

I think this mentions everything of significance. You can find a detailed list of differences by comparing the files with FC, and if there are any questions I'll do my best to explain them.

Data files

There are four DVD's of the raw and processed ADCP data, plus one CD containing copies of the CTD processing directories, including the processed data and scripts. The ADCP raw data consists of the files written by 8/23/2002 03:38 pm. This corresponds to a location southeast of the Bering Strait, before our arrival at Nome, but well out of the study area. There is a little more data on the DAT tape that Sean gave me, but I don't think it's worth the trouble to recover it. If you'd like, I can send you the tape. The disks are:


Disk 1: DVD format; BB150 Raw VMADCP files except N1R nav files

Disk 2: DVD format; BB150 raw N1R nav files; CODAS3 processed data set and subdirectories

Disk 3: DVD format; OS75 Raw VMADCP files except N1R and N2R nav files

Disk 4: DVD format; OS75 raw N1R and N2R nav files; CODAS3 processed data set and subdirectories

Disk 5: CD format; CTD data, processing scripts and cruise-processed graphics


Appendix: VMADCP log from SBI Summer cruise (HLY-02-03)


Setup of BNL processing of ADCP data

bnl_sta_newday.py had a syntax error on line 1098: an extra parenthesis.

Running the script, it starts to scan the data files and then fails.

Path created by scan\startup.m is incorrect for three directories: misc, codas3, and rawadcp.

         - The problem was that the matpath variable was incorrect. I edited it and processing proceeded OK.

In plotting the cruise track during load, Matlab fails looking for \matlab6p1\toolbox\opnml_matlab5\basics.


Got Charlie to email me a copy of the directory; put it in the above location and things ran well.

Needed to edit *.agx file. Just deleted records with NaN's in them.

Auto editing routine failed.



Changed PC clock on BB150 PC to agree with OS75 PC; it was 1 hour fast.

Restarted BB150 PC

0024 restarted BB150 VMDAS


1530 (approx)

OS75 had hung since ~1130

Restarted with AutoADCP; BB150 OK


Checked INI files for BB150 and OS75. OS75 was using Chukchi.ini so changed BB150 manually.

Ran system monitor on BB150 PC to see if a memory problem was obvious. Memory was pegged at 122MB. It didn't change when I started up AutoADCP even though it failed on a memory error.



Both ADCP's had hung up at some earlier time. Drives on the server were not accessible. Needed to power cycle the hub (top of blue cabinet) to restore the connection. Restored at 0836.



Restarted OS75. Nav window displayed, which seemed unusual, so I did a restart. Buffer problem message was displayed periodically but too fast to read.




Sean ran Norton Utilities on the BB150 PC earlier, so it was down for a while. No change in AutoADCP.



Noted BB150 PC clock was off by 1 hour - reset it.



Tried Charlie's latest version of AutoADCP (III) - still crashes.


1300 (approx)

Changed BB150 to Arctic.ini



Tried the arctic.ini file with the OS75. I get much deeper penetration with this ini file.


Changed ini file back to chukchi.ini. Should still be in chukchi until 71.95 N


Both PC's hung due to problem with network hub.


Restarted AutoADCP on OS75. Had to reboot BB150 PC so it took a little longer.


Changed ini file on BB150 PC since it had changed (automatically) on OS75 PC.



ADCP's have been running with no problems for last few days.



Discovered the BB150 had run out of disk space on the PC. Found some old files and deleted them to open up 16MB, and restarted.

1600 Disk filled up again so moved more files. Sean deleted a lot more. Didn't have to keep the files since this isn't primary storage (the server disk is).