FILENAME: NF1704_os150bb_proc.txt
BTE - RESTORE 2017 - NOAA Ship NANCY FOSTER
CODAS3 OS150 BROADBAND PROCESSING
PROJECT CRUISE ID: NF-17-04
NOAA SHIP CRUISE ID: NF-17-04
================================================================
================================================================
fill in the following from cruise_info.txt once it has been
created (after step #13):

LAST CHANGED  : 2017/11/27
CRUISE NAME(S): NF1704_os150
CRUISE DATES  : 2017/05/07 13:58:00.03 to 2017/06/02 15:03:25.11
SHIP NAME     : NANCY FOSTER
DATABASE NAME : NF1704
DATA FILES    : nf2017_126_49979.raw to nf2017_152_50400.raw
INSTRUMENT    : os150
ACQUISITION   :
    PROGRAM   : uhdas
PROCESSING    : python
LOGGING       :
  PARAMETERS  :
    BT   : bottom track mode (on or off)
    SI   : sampling interval or averaging period for ensemble (sec)
    NB   : number of bins
    BL   : bin length (m)
    TD   : transducer depth (m)
    BK   : blanking length (m)
    HO   : heading offset applied by DAS (deg)
    HB   : heading bias (deg)
    CRPH : compensation for roll-pitch-heading (1:on, 0:off)

  yy/mm/dd hh:mm:ss     BT   SI  NB  BL  TD  BK  HO     HB    CRPH
  2017/05/07 13:58:00  off  300  50   2   3   4  44.30  0.00  0001
  2017/05/10 20:57:50  off  300  50   2   3   4  44.30  0.00  0001
  2017/05/19 04:57:52  off  300  50   2   3   4  44.30  0.00  0001

HEADING       :
    PRIMARY   : heading from gyro
    CORRECTION: heading correction from posmv
    NOTE: time-dependent heading corrections applied IN the
          ensembles (see cal/rotate/ens_hcorr.ang)

POSITIONS     : gps positions from gpsnav

CALIBRATION   : (check original processing parameters)
    additional rotation: 0
    final transducer angle is:
        (original transducer angle) - (rotate_angle)
    applied scale factor: 1
    additional scale factor: (none)

--- processing parameters ----------
## (determined from "sonar"): model     = os
## (determined from "sonar"): frequency = 150
## (determined from "sonar"): instname  = os150
## (determined from "sonar"): pingtype  = bb
           badbeam   None
         beamangle   30
        configtype   python
        cruisename   NF1704_os150
          datatype   uhdas
            dbname   NF1704
           ens_len   300
           fixfile   NF1704.gps
        hcorr_inst   posmv
             pgmin   50
          pingpref   None
       proc_engine   python
        ref_method   refsm
   refuv_smoothwin   3
      refuv_source   nav
             sonar   os150bb
          txy_file   NF1704.gps
         xducer_dx   0
         xducer_dy   0
          yearbase   2017
================================================================
================================================================
UHDAS PROCESSING USING CODAS3 (ALL PYTHON UBUNTU LINUX VERSION)
REPROCESSING ALL SINGLE-PING DATA FROM SCRATCH...

Processed by Ryan Smith.  November 27, 2017
================================================================

(1) Prior to processing, create a directory on my mac (outside of the
virtual machine) at:

    /home/rsmith/sadcp/PROJECTCRUISEID/at_sea_data/

In our case, this would be:

    /home/rsmith/sadcp/NF1704/at_sea_data/

(2) Now copy the directory structure from the cruise data DVD, or
other location (perhaps out on phodnet cruise data), to this folder,
e.g.:

    NF1704
      at_sea_data
        gbin
        proc
        raw
        rbin

(3) When this copy is complete, enter the virtual machine, open a
terminal window, and navigate to SADCP.  This is a soft link pointing
to /home/rsmith/sadcp/

==== UHDAS processing from scratch, using python ==============

(4) From here, to make sure that all ownerships are compatible with
the codas software, type:

$ sudo chmod -R 777 PROJECTCRUISEID

so in our case that would be:

$ sudo chmod -R 777 NF1704

(5) Now go into that directory and create a new directory called
"processing":

$ cd NF1704
$ mkdir processing

(6) Now go into that directory and create a new directory called
"config", and go into that directory:

$ cd processing
$ mkdir config
$ cd config

(7) Run the following command:

$ uhdas_proc_gen.py -s nf

This command creates a command file called "nf_proc.py" with details
relevant to the specific ship (in our case the NANCY FOSTER).

(8) Now rename this file (using the move command "mv") to include the
cruise/project-specific id and the instrument.
In our case, this would be:

$ mv nf_proc.py NF1704_os150_proc.py

(9) Edit this file (using nano, pico, or gedit, etc.) and uncomment
the cruise-specific lines in the header (i.e. remove the "#" from the
beginning of the lines).  After uncommenting them, update each line
with the relevant information specific to this cruise:

NOTE: the "cruisename" parameter must match the prefix on the
...proc.py file located in the config directory
(e.g. filename = "NF1704_os150_proc.py", therefore use "NF1704_os150")

yearbase   = 2017             # usually year of first data logged
cruisename = 'NF1704_os150'   # specify cruise name and instrument
uhdas_dir  = '/home/rsmith/sadcp/NF1704/at_sea_data/NF1704_merged/'
                              # path to uhdas data directory
shipname   = 'NOAA Ship NANCY FOSTER'   # for documentation
cruiseid   = 'NF1704'         # for titles

(10) Now go back to the "processing" folder and create the directory
structure needed for the data you are going to process, using the
adcptree.py command.  In our case, we are using broadband OS150 data
collected using UHDAS.

$ cd ..
$ adcptree.py os150bb --datatype uhdas --cruisename NF1704_os150

(11) Confirm that NF1704_os150_proc.py was added to the following
directory:

$ cd /home/rsmith/sadcp/NF1704/processing/os150bb/config
$ ls

(12) Go back to the "os150bb" folder and create a quick_adcp.py
control file by using the "cat << EOF > q_py.cnt" format below.
Modify the following text and then cut and paste it into the terminal
window (starting from the word "cat"):

$ cd ..
$ cat << EOF > q_py.cnt
####----- begin q_py.cnt------------
## all lines after the first "#" sign are ignored
--yearbase 2017
--cruisename NF1704_os150   # used to identify configuration files
                            # *must* match prefix of files in config dir
## --update_gbin
## NOTE: You must remake gbins for python processing if
##   - you are not sure
##   - if parameters for averaging changed
##   - various other reasons.
## ==> just do it, and put them somewhere else
## could do this:
##    mv ../../at_sea_data/gbin/ ../../at_sea_data/gbin.origmat
##    and NOT use the py_gbindirbase argument
## --or-- could put the py gbins in a new name
--py_gbindirbase ../../at_sea_data/gbin_py   # (will put them locally)
--configtype python        ## use config/NF1704_os150_proc.py
--sonar os150bb
--dbname NF1704
--datatype uhdas
--ens_len 300
--ping_headcorr            ## applies heading correction.
                           ## settings found in config files
                           ## values stored in cal/rotate/ens_hcorr.*
--proc_engine python       ## use python to do calculations
--max_search_depth 500     ## don't look deeper than 500 m
--auto                     ## be speedy
####----- end q_py.cnt------------
EOF

Now you should see a file called q_py.cnt in your directory.  Verify
that it looks correct by editing it.

(13) Now make your first pass with quick_adcp.py:

$ quick_adcp.py --cntfile q_py.cnt

NOTE: This step creates cruise_info.txt.  Cut and paste from that
file to fill in the header info at the beginning of this file...

(14) Now take a look at all of the .png figures using "figview.py":

$ figview.py --type png .

#### Errors in processing when trying step 15 below!  Spoke with
#### Jules.  Email is as follows:
##
## The reason is buried in the POSMV QC cutoff.
##
## The config/proc_cfg.py file for that cruise has
##    acc_heading_cutoff = 0.028  # baseline change to 0.026 in 2017/04 (now v.5)
##
## but before the number was 0.02 (they did something with their posmv,
## probably the IMU, and the heading accuracy increased).  We found that
## we had to increase that number for a bunch of ships as they
## changed their POSMV.
##
## Note in your cal/rotate/ens*png all the heading corrections are
## zero because the posmv was declared bad.
##
## Change the line in NF1704_os150_proc.py to say
##    acc_heading_cutoff = 0.028  # baseline change to 0.026 in 2017/04 (now v.5)

#### Now proceeding following fix...  RHS 16NOV2017
#### Must redo the processing including the gbins...
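The cutoff behavior described in that note can be sketched as follows.  This is a minimal illustration of the gist, not the CODAS/UHDAS source; the function and variable names are hypothetical.

```python
# Illustrative sketch of the POSMV heading QC described in the note
# above.  Names are hypothetical -- this is NOT the CODAS source,
# just the cutoff behavior it describes.

ACC_HEADING_CUTOFF = 0.028   # deg; raised from 0.02 after the POSMV change


def posmv_heading_usable(reported_accuracy):
    """A POSMV heading correction is accepted only when the reported
    heading accuracy is within the configured cutoff."""
    return reported_accuracy <= ACC_HEADING_CUTOFF


# With the old 0.02 cutoff, ensembles reporting accuracies like 0.025
# were declared bad, so the applied heading corrections all fell back
# to zero -- the symptom seen in cal/rotate/ens*.png.
accuracies = [0.021, 0.025, 0.031]
usable = [posmv_heading_usable(a) for a in accuracies]
```

With the cutoff at 0.028, the first two example ensembles pass and only the 0.031 ensemble is rejected; with the old 0.02 cutoff, all three would be rejected.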
(15) If there are significant spikes in the hcorr*.png files, use the
following method to remove them.  Otherwise go to step (16).  Ask
Ryan for more details.

(A) Go to /home/rsmith/sadcp/NF1704/processing/os150bb/cal/rotate/

$ cd cal/rotate

(B) Fix the spikes using "patch_hcorr.py":

$ patch_hcorr.py

Once in the program, put a check mark next to the "cleaner" option,
then click "draw/update" to see the update.  You should see that some
points were excluded.  If you are happy, click "write/exit"; if you
are unhappy, find Ryan.

#### NOTE!  FOR THIS CRUISE I USED:
####    'cleaner' (checked)
####    median filter cutoff:    0.2  (default is 3.0)
####    median filter halfwidth: 8    (default is 31)

(C) Inspect the corrected hcorr.png files:

$ figview.py newhcorr*.png

You should see gaps filled with red "+" symbols, and major spikes
should be gone.

(D) Apply the changes by issuing the following commands:

$ rotate unrotate.tmp
$ rotate rotate_fixed.tmp
$ cd ../..
$ quick_adcp.py --steps2rerun navsteps:calib --auto

(16) Now look at the watertrack heading calibration, and the bottom
track heading calibration if available:

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/watertrk/adcpcal.out

    Number of edited points:  78 out of  90
               median    mean     std
    amplitude  1.0025   1.0018   0.0084
    phase     -0.1710  -0.1828   0.5351

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/botmtrk/btcaluv.out

    no file generated... no bottom track data collected on this cruise...

We will apply the following rotations...  I will go with the median
values from watertrack:

    amplitude:  1.0025
    phase:     -0.1710

(17) Create the control file "qrot_py.cnt" by cutting and pasting the
following at the command prompt in the "os150bb" folder (starting
with "cat").  Note the amplitude and phase corrections in the file...

$ cd ..
$ cd ..
$ cat << EOF > qrot_py.cnt
####----- begin qrot_py.cnt------------
## all lines after the first "#" sign are ignored
--yearbase 2017
--cruisename NF1704_os150   # used to identify configuration files
                            # *must* match prefix of files in config dir
--steps2rerun rotate:navsteps:calib   ## which steps to rerun
--rotate_amplitude 1.0025   ## amplitude rotation
--rotate_angle -0.1710      ## phase rotation
--auto                      ## be speedy
####----- end qrot_py.cnt------------
EOF

(18) Now rerun "quick_adcp.py" with the newly created control file to
apply the rotation:

$ quick_adcp.py --cntfile qrot_py.cnt

(19) Now look back at the new watertrack heading calibration, and the
bottom track heading calibration if available, and add it below:

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/watertrk/adcpcal.out

    Number of edited points:  78 out of  90
               median    mean     std
    amplitude  1.0000   0.9994   0.0082
    phase     -0.0025  -0.0127   0.5362

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/botmtrk/btcaluv.out

    no file generated... no bottom track data collected on this cruise...

(20) Finally it is time to manually edit the data using "gautoedit.py":

$ cd edit
$ gautoedit.py -n5

Manually edit data points and profiles using the gautoedit tool,
making sure to "Apply Manual Editing" on each screen once you have
cleaned all of the data, prior to clicking the forward arrow to
advance to the next time chunk.  If you have trouble with gautoedit,
ask Ryan for a tutorial.

(21) Once manual editing of velocity profiles is completed using
"gautoedit.py", create another control file (qedit_py.cnt):

$ cd ..
$ cat << EOF > qedit_py.cnt
####----- begin qedit_py.cnt------------
## all lines after the first "#" sign are ignored
--yearbase 2017
--cruisename NF1704_os150   # used to identify configuration files
                            # *must* match prefix of files in config dir
--steps2rerun apply_edit:navsteps:calib   ## which steps to rerun
--auto                      ## be speedy
####----- end qedit_py.cnt------------
EOF

(22) Now apply your edits:

$ quick_adcp.py --cntfile qedit_py.cnt

(23) Now look once more at the heading correction information and add
the information to the lines below:

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/watertrk/adcpcal.out

    Number of edited points:  77 out of  86
               median    mean     std
    amplitude  1.0000   0.9992   0.0081
    phase      0.0160  -0.0062   0.5389

from: /home/rsmith/sadcp/NF1704/processing/os150bb/cal/botmtrk/btcaluv.out

    no file generated... no bottom track data collected on this cruise...

Okay, I will leave the rotation as is...

################################################################################
REMEMBER:
  1.0 degree error in heading is approximately 10 cm/s error in velocity.
  0.1 degree error in heading is approximately 1.0 cm/s error in velocity.
################################################################################

(24) Now generate some web plots of the data:

$ quick_web.py --interactive

NOTE: if you have previously defined the sections (doing the same
step earlier), you can do the following to save some time...

$ mkdir webpy
$ cp ../os150bb/webpy/sectinfo.txt webpy
$ quick_web.py --redo

Also, you can change the velocity reference layer for the plots and
the vector length scale.  Use "--help" to find out more.  This GUI
can take some getting used to.  Play with it a little and then ask
Ryan any questions.

(25) Now generate additional data formats (matfiles and netCDF):

$ quick_adcp.py --steps2rerun matfiles --auto

NOTE NEW NETCDF GENERATION FORMAT BELOW...
$ adcp_nc.py /home/rsmith/sadcp/NF1704/processing/os150bb/adcpdb os150bb NF1704_os150 os150bb

(26) Now generate our needed output files for subsequent processing
using "adcpsect.py":

$ cd contour
$ adcpsect.py

I will use the following:

    vertical grid:
        start at:            0
        vertical increment:  2
        number of points:    61
    extract data by time:
        grid (mins):         5
        number of bins:      100
    outfiles prefix:  NF1704_os150bb
    time range:       2017/05/07 00:00:00.00 to 2017/06/03 00:00:00.00
                      (force manually)

(27) YOU ARE FINISHED!!!  Well, at least with the CODAS virtual
machine.  You now need to go back to your host machine and copy (via
unix, winxp, or mac osx) your data back to its proper place on
phodnet.

You also need to do the following to prep the data for LADCP data
processing...

Run adp_in in matlab (make sure you are in the contour folder):

    adp = adp_in('NF1704_os150bb_uv','NF1704_os150bb_xy','NF1704_os150bb');

This creates NF1704_os150bb_adp.mat.  Copy this to the ladcp dir
where you are processing...

Edit the mat file and change adp.lon into degrees west
(i.e. adp.lon = adp.lon - 360).

Make sure you save the matfile in a -v6 format, e.g.:

    save NF1704_os150bb_adp.mat -mat -v6

FINISHED PROCESSING!  RHS 27NOV2017
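For reference, the amplitude/phase calibration applied in step (18) and the heading-error rule of thumb in the REMEMBER box can be sketched in python.  The sign convention and the assumed ~5.7 m/s (about 11 kt) ship speed are illustrative; quick_adcp.py performs the actual rotation via --rotate_amplitude and --rotate_angle.

```python
import cmath
import math


def rotate_uv(u, v, amplitude, phase_deg):
    """Apply a watertrack calibration as a complex scale-and-rotate
    of (u, v).  Sign convention here is illustrative only; in
    practice quick_adcp.py applies the rotation internally."""
    w = amplitude * cmath.exp(-1j * math.radians(phase_deg)) * complex(u, v)
    return w.real, w.imag


# Rule of thumb from the REMEMBER box: a heading error dtheta leaks
# roughly U_ship * sin(dtheta) of cross-track error into the ocean
# velocities.  At an assumed ~5.7 m/s ship speed, 1 degree of heading
# error is on the order of 10 cm/s:
err_1deg = 5.7 * math.sin(math.radians(1.0))   # roughly 0.1 m/s
```

An amplitude of 1 and phase of 0 leave the velocities unchanged, which is why the near-unity, near-zero post-rotation statistics in steps (19) and (23) indicate the calibration is done.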