
4/16/15:

expoles directory:

Code in expoles/ evolves O+ from kbot to the top, independent of 
WACCM physics/chemistry. op_out and opnm_out are allocated as
module data in dpie_coupling, and updated in oplus.F90, where
output from the previous step is input to oplus_xport (see lines
300-318 in oplus.F90). O+ below kbot is not changed by this code,
it is simply updated from op input (k=1,kbot-1 (bot2top)), as 
received from physics/chemistry. Oplus output is not passed back
to physics via the tracer.

Latitude loops are globally lat=2,plat-1, i.e., excluding the poles. 
All inputs are copied locally into arrays that include halo points. 
Halos at the poles (lat=1 and plat), and at lat=plat+1 and lat=0 
are calculated by sub mp_polelats_f3d (edyn_mpi.F90). These halos
become jp1,jp2 (in the north, when lat==plat-1), or jm1,jm2 (in the
south, when lat==2).
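The cross-pole halos can be pictured as the interior rows rotated 180 degrees in longitude, since the point "across the pole" sits half a grid revolution away. A minimal single-level NumPy sketch of that idea (the row layout and names here are assumptions, not the actual mp_polelats_f3d interface):

```python
import numpy as np

def set_pole_halos(f):
    """Fill cross-pole halo rows of a (plat+2, nlon) field.

    Rows 2..plat-1 are the interior latitudes; the pole rows plus the
    extra rows (0,1 in the south, plat,plat+1 in the north) act as
    halos.  A point across the pole lies 180 degrees away in longitude,
    i.e. the nearest interior row rotated by nlon/2.
    (Illustrative sketch only; layout is an assumption.)
    """
    nlon = f.shape[1]
    s = nlon // 2
    f[1]  = np.roll(f[2], s)    # jm1: first interior row, rotated 180 deg
    f[0]  = np.roll(f[3], s)    # jm2: second interior row, rotated
    f[-2] = np.roll(f[-3], s)   # jp1: last interior row, rotated
    f[-1] = np.roll(f[-4], s)   # jp2: second-to-last interior row, rotated
    return f
```

In the MPI code the same fill is done only on the tasks owning the top and bottom latitude rows, then exchanged; the rotation itself is the essential step.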

Plots from this code look good in the first few timesteps, but after
an hour of model time (12 5-min steps), the poles and first 2-3 lats
below the poles diverge severely from the rest of the grid. This has
also happened with other methods of dealing w/ the poles, e.g.,
averaging, extrapolation, using the pole constants set by chemistry,
setting poles and halos using mp_polelats_f3d. Plots of op_out are 
dominated by values at and just equatorward of the poles, esp in the
upper thermosphere, and getting worse with model time.

Hanli suggests turning on shapiro smoother (lat,lon, probably not
time smoother yet), then possibly adding an fft polar filter, as
in tiegcm. 
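The Shapiro smoother Hanli suggests is a high-order low-pass stencil that strongly damps the two-gridpoint wave while barely touching long waves. A hedged 1-D longitude sketch of a TIEGCM-style fourth-order version (the stencil form and the coefficient value are assumptions here):

```python
import numpy as np

def shapiro_lon(f, sh=0.03):
    """Fourth-order Shapiro smoother along a periodic longitude ring.

    fs(i) = f(i) - sh*( f(i+2) + f(i-2) - 4*(f(i+1) + f(i-1)) + 6*f(i) )

    The shortest resolvable (two-gridpoint) wave is damped by a factor
    (1 - 16*sh) per pass; a constant field passes through unchanged.
    Coefficient sh=0.03 is an assumed TIEGCM-like default.
    """
    fp1, fm1 = np.roll(f, -1), np.roll(f, 1)
    fp2, fm2 = np.roll(f, -2), np.roll(f, 2)
    return f - sh * (fp2 + fm2 - 4.0 * (fp1 + fm1) + 6.0 * f)
```

The same stencil applied in latitude (with one-sided or cross-pole treatment at the ends) gives the lat smoother; a time smoother would be a separate three-level filter.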

For svn info, see files svn.info and svn.diffs in expoles directory.
Tar file expoles/expoles.src.tar contains *.F90 at this time.

History files and stdout log files from a run using this code are in:
/glade/scratch/foster/fwx.edyn.cam5374/run/expoles.

!-----------------------------------------------------------------------
ToDo:
 - set kbot to constant pressure instead of height zbot.
 - How to vectorize lat loops containing jp1,jm1, etc.? Use options to ifort.
!-----------------------------------------------------------------------

4/16/15,  8:30 pm:
Code is as above, except that I tested with spatial and time smoothing. 
Can save input values at poles, and use them in the smoothing.
Nothing seems to help the pole problem that develops in only 6 or 
so timesteps, in which output op_out becomes very large negative 
numbers. Am saving this as save/source.save001.tar. 

Will now go back to symmetry about the pole, i.e., lat=1,plat, with
halos 0,-1 over the south pole, and lat=plat+1,plat+2 over the north pole.
This will retain the input values at the pole, which are constant in
longitude.

!-----------------------------------------------------------------------
ToDo:
 - Call mp_pole_halos from end of sub mp_geo_2halos
!-----------------------------------------------------------------------

5/4/15 am (after trip to Utah):
Rewrote mp_pole_halos to use mp_gatherlons and mp_scatterlons.
Global longitude data is gathered to westernmost tasks at the
far north and south latitude task rows. Halo points for all 
tasks in the row are set by the westernmost tasks, then they
are scattered back out to the processors in the row. This appears
to have greatly improved plots of all fields, including O+ output.

However, the problem with EXPLICIT1 still exists: it goes too
large in magnitude, mostly at processor longitude boundaries near
the south pole. The north pole still looks ok, and subsequent
modifications of the explicit array appear to be ok. Furthermore,
tiegcm is seeing similar problems with that stage of the explicit
terms, but it does not obviously affect the final O+ output.
There are some small glitches at longitude processor boundaries
at both N and S poles. So this problem remains unsolved, but am 
moving on for now, despite the small glitches at longitude boundaries
at the poles.

This code still has a lot of print statements, commented out old
code (e.g., dealing w/ the poles), but am saving it now before
cleaning it up. Also, note this code is looping 1,plat in latitude
(not 2,plat-1). AND, it is NOT passing O+ back to waccm physics,
i.e., op_out is evolving privately within the dpie coupler.
Saving save/source.save002.tar.

!-----------------------------------------------------------------------

5/4/15, 5 pm.
There is a problem with op_out vs op_solve, see following comments
after the trsolv call, at end of third lat scan:

      call trsolv(p_coeff(kbot:plev,i0:i1),q_coeff(kbot:plev,i0:i1), &
        r_coeff(kbot:plev,i0:i1),explicit(kbot:plev,i0:i1),          &
        op_solve(kbot:plev,i0:i1,lat),kbot,plev,kbot,plev,i0,i1,lat)
!
! If this line is included (i.e., attempt to update op_out), then the 
! max of both op_out and op_solve get larger with model time, up to about 
! +/- 1.e40, then it crashes soon after in i/o with numeric conversion error
! after about 48 timesteps (4 hours).
!
! If this line is NOT included, then op_solve appears to evolve properly
! (at least the max is not increasing), but op_out does not evolve at all.
!
!       op_out(kbot:plev,i0:i1,lat) = op_solve(kbot:plev,i0:i1,lat)

 300 continue
    enddo ! end third latitude scan (lat=j0,j1)

I THINK this may be happening because op input is updated from op_out
(which is NOT being updated, as commented above), but op_solve is not.
If so, this would indicate that the uncontrolled increase in the max
magnitude of op_out comes out of trsolv when the field is actually
evolving (i.e., when input op = op_out at the beginning of oplus_xport).
Saving this as save/source.save003.tar. 
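For reference, a trsolv-style routine is a tridiagonal solve along the vertical column: the p,q,r coefficients form the sub-, main-, and super-diagonals, and the explicit terms form the right-hand side. A generic Thomas-algorithm sketch (hypothetical interface, not trsolv's actual level/longitude-slab argument list):

```python
import numpy as np

def thomas(p, q, r, d):
    """Solve p[k]*x[k-1] + q[k]*x[k] + r[k]*x[k+1] = d[k], k=0..n-1,
    by the Thomas algorithm: forward elimination then back substitution.
    p[0] and r[n-1] are unused (boundary rows).  Generic sketch of what
    a trsolv-style routine does for each column.
    """
    n = len(d)
    c = np.zeros(n)  # modified super-diagonal coefficients
    g = np.zeros(n)  # modified right-hand side
    c[0] = r[0] / q[0]
    g[0] = d[0] / q[0]
    for k in range(1, n):        # forward elimination
        denom = q[k] - p[k] * c[k - 1]
        c[k] = r[k] / denom
        g[k] = (d[k] - p[k] * g[k - 1]) / denom
    x = np.zeros(n)
    x[-1] = g[-1]
    for k in range(n - 2, -1, -1):  # back substitution
        x[k] = g[k] - c[k] * x[k + 1]
    return x
```

Because the back substitution feeds each level into the next, any bad coefficient or explicit value at one level contaminates the whole column, which is consistent with a single bad input blowing up the solved O+.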

5/5/15:
op_solve is eliminated ('OP_SOLVE' is available from op_out after trsolv).
If I am NOT using op_out from previous step as input op to oplus_xport, then
everything looks pretty good. But if I DO use op_out from previous step
as input to current step, then most fields are overwhelmed by problems at
the poles within a few timesteps. See the following in oplus.F90:
!
! Use O+ output op_out, kbot to top, from previous step as input op, to 
! evolve op_out.  This assumes we are NOT sending updated O+ to WACCMX 
! tracer (see bottom of dpie_coupler.F90) for testing.
! This requires making op_out intent(inout)
!
      op  (kbot:plev,i0:i1,j0:j1) = op_out  (kbot:plev,i0:i1,j0:j1)
      opnm(kbot:plev,i0:i1,j0:j1) = opnm_out(kbot:plev,i0:i1,j0:j1)

With the above lines, where we are using output of the previous step as
input for the current step, the code fails as described above. If the above
lines are commented out, the O+ output looks ok (but it's only one timestep
of transport). 

Also merged Chris Fischer's modifications to edyn_mpi.F90, including
rewritten sub mp_mag_jslot, and modifications to mp_gatherlons_f3d.
Re the latter, I had to add my +4 mods to the gatherlons routine to
accommodate mp_pole_halos. Am saving this as save/source.save004.tar.

!-----------------------------------------------------------------------
5/9/15:
This code appears to work well if op_out is NOT allowed to be input
for the next timestep. In this case, op_out does not evolve. If it is
allowed to evolve (i.e., using output from previous step as input to
the current step), then the code falls apart in only a few timesteps.
It is possibly related to longitude halo points near the poles, especially
a problem in the southern hemisphere at December solstice. First timestep looks
fine, but the bug increases in magnitude very quickly with model time.

Obviously, I have to solve the above problem, because op_out must 
evolve with model time. In this version, op_out is NOT passed back
to WACCM physics. Instead, if op_out is to evolve internal to dpie coupling,
then op_out from previous timestep is fed back into oplus_xport as O+ input.
This is when the problem occurs.

This code loops globally lat=2,plat-1, i.e., excluding the poles.
Rewrote sub setpoles (edyn_mpi.F90), and call it as needed, esp after
third latitude scan, where coefficients and explicit terms receive
values at the poles before trsolv is called. In this version, I have
added a latitude dimension to the solver coefficients and explicit terms.
This way, they can receive polar values after the lat=2,plat-1 third loop, 
and before trsolv is called. Trsolv is called in its own lat=j0,j1 loop.

Beware not to call trsolv if skipping certain parts of the third latitude 
scan while debugging with "goto 300". E.g., if skipping calculations of
p,q,r coeffs or explicit, then do not call trsolv.
 
Saving save/source.save005.tar.
!-----------------------------------------------------------------------
5/10/15:
Essentially the same as above, except that trsolv calls are commented,
and it is NOT using op_out as input to the next step. Everything looks
pretty good, including EXPLICIT1. Added diags for EXPLICIT,P_COEFF,
Q_COEFF,R_COEFF immediately before the trsolv call, and they look ok.

Maybe there's a problem in the logic to evolve op_out internally to
dpie coupling. Am considering going to passing op_out back to physics
and see what happens.

Saving save/source.save006.tar.
!-----------------------------------------------------------------------
5/13/15:
fft polar filtering is now working, using fft9.F90, which was adapted
from TIEGCM code. It may also work using fft99.F90 from cam/src/utils,
but have not tried it yet.  Both filter1 and filter2 are working.
The bad news is that op_out continues to rapidly increase in magnitude 
with model time. Will now experiment with tuning the fft filtering,
reducing timestep, longer runs, etc.
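The idea of the fft polar filter is to transform each latitude circle in longitude, zero (or damp) zonal wavenumbers above a cutoff that shrinks toward the poles, and transform back, so the shortest retained wave keeps a roughly constant physical length. A single-ring NumPy sketch (the cutoff choice is an assumption, not the actual fft9.F90 tables):

```python
import numpy as np

def polar_filter(f, nkeep):
    """Remove high zonal wavenumbers on one latitude circle.

    Real FFT the longitude ring, zero all wavenumbers >= nkeep,
    inverse transform.  In a TIEGCM-style filter, nkeep decreases
    toward the poles; here it is simply passed in.
    """
    spec = np.fft.rfft(f)
    spec[nkeep:] = 0.0               # truncate high wavenumbers
    return np.fft.irfft(spec, n=len(f))
```

Tuning then amounts to choosing nkeep as a function of latitude (and possibly tapering rather than hard-truncating the spectrum).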

5/14/15:
Dan Marsh made a chemical pre-processor file that turns off transport
of ions, electrons, etc. I have copied his file chem_mech.in to
~/cesm/fwx.edyn.cam5374/SourceMods/src.cam, then modified env_build.xml
as follows (see Dan's email to me at 9:54 am today):

<!-- "CAM configure options, see CAM configure utility for details (char) " -->
<entry id="CAM_CONFIG_OPTS"   value="-phys cam4 -chem waccm_mozart -waccmx -usr_mech_infile $CASEROOT/SourceMods/src.cam/chem_mech.in"  />

But this did not work - it produced a runtime error. Dan says Francis will
try to reproduce the error. So for now, I have commented the additional
option in env_build.xml above before saving this source as save/source.save007.tar.
!-----------------------------------------------------------------------
5/20/15:
Basically same as 5/13 above, but now using pointers instead of allocatable
arrays to send multiple fields to mp_geo_halos and mp_polar_halos in oplus.F90. 
This avoids copying data, so should be faster and use less memory. 
As above, this code runs, does not crash, produces reasonable oplus,
but there is no sign of the equatorial ionization anomaly (EIA) in waccm op_out and explicit
terms. Saving this prior to merging with Joe's branch in which advection
is turned off. Saving save/source.save008.tar (this file is larger than
the others because I included the save directory in the tar file).
!-----------------------------------------------------------------------
save/source.save009.tar:

Not sure about this. It does not have Joe's changes to dpie_coupling
and dp_coupling.
!-----------------------------------------------------------------------
6/3/15: save/source.save010.tar

This code is not running! This was an attempt to extend the use of array_ptr_type
into mp_gatherlons_f3d, mp_scatterlons_f3d, lonshift_blocks, etc., but something
is wrong: it is crashing in sub bdzdvb (third lat scan of oplus_xport). Before the 
call, the hj scale height has min,max of 0.,1.e5, but on entry to bdzdvb the dummy 
arg has min,max of 0.,0. Then it crashes w/ a corrupted traceback.

So I am going back to source.save009, as described above, and will then
implement the array pointer type more gradually.
!-----------------------------------------------------------------------
6/4/15: save/source.save011.tar

This code is working, using subs xxx_ptrs with array_ptrs_type (mp_gatherlons_f3d_ptrs,
mp_scatterlons_f3d_ptrs, lonshift_blocks_ptrs, setpoles_ptrs, filter1,2_op_ptrs,
savefld_waccm_switch_ptrs). Subs that do not use array_ptrs_type are still in the code, 
but are not called. (subs mp_geo_halos and mp_pole_halos were already converted
in save009 to use array_ptrs_type).

After saving this, I will remove the subs that do not use array_ptrs_type,
and rename the xxx_ptrs subs to the old names (without the _ptrs suffix). 
!-----------------------------------------------------------------------
6/4/15: save/source.save012.tar

All routines xxx_ptrs have been renamed without the _ptrs suffix, and the
old routines removed. So now all routines described above (save011.tar) are
using array_ptrs_type. Also added new routine "switch_model_format" (edyn_mpi.F90), 
which phase shifts longitudes and inverts vertical dimension for an arbitrary
number of fields. This is called twice from dpie_coupling (first to convert
from WACCM to TIEGCM before dynamo and oplus_xport, and again to convert
from TIEGCM to WACCM afterwards). It is also called from sub savefld_waccm_switch
in oplus.F90.
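The two conversions are involutions: a 180-degree longitude phase shift plus a vertical flip, applied once going into the dynamo and once coming back. A minimal NumPy sketch of the idea (illustrative layout; the real routine operates on distributed pointer arrays):

```python
import numpy as np

def switch_model_format(fields):
    """Convert (nlev, nlat, nlon) fields between WACCM and TIEGCM
    conventions: shift the longitude origin by 180 degrees and invert
    the vertical dimension (bottom-up <-> top-down).  Applying the
    routine twice returns the original fields, matching the paired
    dpie_coupling calls.
    """
    out = []
    for f in fields:
        nlon = f.shape[-1]
        g = np.roll(f, nlon // 2, axis=-1)  # 180-degree longitude shift
        g = g[::-1, ...]                    # flip vertical (first) axis
        out.append(g)
    return out
```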

All fields (about 79) related to oplus_xport are still being saved to WACCM histories 
(savefld_waccm_switch calls in oplus.F90).

This code was tested at 60-second and 5-minute timesteps w/ 1 or 2-day runs.
!-----------------------------------------------------------------------
6/4/15: Made svn commit from:
  /glade/p/work/foster/cesm/waccmx_edyn_t3d_cam5_3_74/components/cam/src
  Next steps:
  - fix edynamo.nc (apparently not working for 2d mag arrays)
  - Comment out most savefld_waccm_switch calls in oplus.F90
    (i.e., save only op_out and opnm1_out)
  - Chris and Joe are working on restart capability.
!-----------------------------------------------------------------------
6/17/15: Working with nsplit et al. in an attempt to mitigate "banding" in
O+. The following worked for 3-day solstice run:

dtime = 300   (main timestep)
nsplit = 8    (dynamics: 300/8=37.5)
nspltvrm = 4  (vertical remapping: 300/4=75)
nspltrac = 4  (tracers: 300/4=75)
nspltop = 5   (oplus_xport 300/5=60)

The above settings eliminated the banding problem.
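Each split count subcycles its process within the main step, so the effective sub-step is dtime divided by the count, as in this small check of the values above:

```python
# Sub-step lengths implied by the split counts (dtime in seconds).
# Each process runs its count of sub-steps per main timestep, so its
# effective timestep is dtime / count.
dtime = 300.0
splits = {"nsplit": 8, "nspltvrm": 4, "nspltrac": 4, "nspltop": 5}
substeps = {name: dtime / n for name, n in splits.items()}
# nsplit -> 37.5 s, nspltvrm -> 75.0 s, nspltrac -> 75.0 s, nspltop -> 60.0 s
```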

6/18/15: Added mods from Joe for chemistry Te,Ti.

6/22/15:
Added Ne diagnostic to waccm and edynamo histories (dpie_coupling.F90),
and some other minor mods concerning saving fields to history files.

Saving save/source.save013.tar before setting up multiple edynamo_xxx.nc
files, because edynamo.nc alone gets too big. Then will make commit 
before making several longer (like 10-day) test runs.

!-----------------------------------------------------------------------
6/23/15:
Saving save/source.save014.tar. Added multiple edynamo_nnn.nc files.
This should be same as svn revision r71296. Now making 4 10-day
test runs.
!-----------------------------------------------------------------------
7/23/15:
Saved save/source.save015.tar. This code corresponds to r71920 commit
made today. The commit also includes modifications to dp_coupling.F90
and iondrag.F90 in other source dirs.
!-----------------------------------------------------------------------
7/29/15:
Commit r72012 for new solar zenith angle calculation. New sub get_zenith
replaces old sub calc_chi in oplus.F90. Sub get_zenith is called by sub
oplus_flux for O+ upper boundary.
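The standard zenith-angle relation that a get_zenith-style routine evaluates is cos(chi) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(h). A sketch of that textbook formula (the actual get_zenith in oplus.F90 may differ in inputs and conventions):

```python
import numpy as np

def cos_zenith(lat, decl, hour_angle):
    """Cosine of the solar zenith angle chi:
        cos(chi) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(h)
    with latitude, solar declination, and local hour angle in radians
    (h = 0 at local solar noon).
    """
    return (np.sin(lat) * np.sin(decl)
            + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
```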
!-----------------------------------------------------------------------
7/30/15:
Copied Dan's mods for new chemistry to include op(2d) and op(2p), see
new chem init file SourceMods/src.cam/chem_mech.in, and code mods in
cam/src/chemistry/mozart/mo_usrrxt.F90.  Also fixed save of WACCM_NE 
after oplus_xport in dpie_coupling.F90.  Saved save/source.save016.tar
even though it only has the WACCM_NE fix.
!-----------------------------------------------------------------------
8/3/15
Added CAM interface dpie_coupling.F90 module in cam/src/physics/cam and 
a flag to dp_coupling.F90 as part of getting code to work for CAM/WACCM 
compsets when WACCM-X is turned off.  Also, added new chemistry pre-
processor code to cam/src/chemistry/pp_waccm_mozart. Added new use case
file in cam/bld/namelist_files/use_cases for configuration of WACCM-X 
edynamo/oplus compset and fixed fincl3 in WACCM-X 2000 use case file.
Did not do scripts and machines tags for the new compset config files.

8/7/2015
Added conditionals in a few modules and dummy cam physics interfaces to 
get branch to run for CAM/WACCM.  Also, included spatially varying 
upper boundary hydrogen flux.  Another addition is chemical potential 
heating rates for op(2d) and op(2p) reactions.  And O3P cooling and EUV
file input also included.

8/20/15:
Fixed logical flag use_dynamo_drifts (formerly use_dynamo_ionosphere)
to work: when false, empirical drifts from physics/exbdrift are used;
otherwise (use_dynamo_drifts=true), edynamo drifts are used.
Save save/source.save017.tar.

!-----------------------------------------------------------------------
9/24/15
Added weimer05 as option for high-lat potential with the dynamo.
Now, if use_dynamo_drifts=true, the flag highlat_potential_model
can be set to either 'heelis' or 'weimer'.  The empirical potential
code in physics/chemistry still uses weimer96 (use_dynamo_drifts=false)
(it did not look easy to update that code to weimer05). Weimer or 
Heelis models are called from dpie_coupling.F90.  
I do not see the save directory here anymore, so not saving a tar file.

