gmtmath

Reverse Polish Notation (RPN) calculator for data tables

Synopsis

gmt math [ -At_f(t)[+e][+r][+s|w] ] [ -Ccols ] [ -Eeigen ] [ -I ] [ -Nn_col[/t_col] ] [ -Q ] [ -S[f|l] ] [ -T[min/max/inc[+b|i|l|n]|file|list] ] [ -V[level] ] [ -bbinary ] [ -dnodata ] [ -eregexp ] [ -fflags ] [ -ggaps ] [ -hheaders ] [ -iflags ] [ -oflags ] [ -qflags ] [ -sflags ] [ -wflags ] [ --PAR=value ] operand [ operand ] OPERATOR [ operand ] OPERATOR = [ outfile ]

Note: No space is allowed between the option flag and the associated arguments.

Description

gmt math will perform operations like add, subtract, multiply, and divide, plus numerous other operators, on one or more table data files or constants using Reverse Polish Notation (RPN) syntax. Arbitrarily complicated expressions may therefore be evaluated; the final result is written to an output file [or standard output]. Data operations are element-by-element, not matrix manipulations (except where noted). Some operators only require one operand (see below). If no data tables are used in the expression then options -T and -N can be set (and optionally -bo to indicate the data type for binary tables). If STDIN is given, the standard input will be read and placed on the stack as if a file with that content had been given on the command line. By default, all columns except the “time” column are operated on, but this can be changed (see -C). Complicated or frequently occurring expressions may be coded as a macro for future use or stored and recalled via named memory locations.
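
For example, this minimal sketch (constants only, using -Q as described below) pushes 3 and 4 onto the stack, replaces them with their sum, pushes 2, and multiplies, so it should report 14:

gmt math -Q 3 4 ADD 2 MUL =
14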

Required Arguments

operand

If operand can be opened as a file it will be read as an ASCII (or binary, see -bi) table data file. If not a file, it is interpreted as a numerical constant or a special symbol (see below). The special argument STDIN means that stdin will be read and placed on the stack; STDIN can appear more than once if necessary.

outfile

The name of a table data file that will hold the final result. If not given then the output is sent to stdout.

Optional Arguments

-At_f(t)[+e][+r][+s|w]

Requires -N and will partially initialize a table with values from the given file t_f(t) containing t and f(t) only. The t is placed in column t_col while f(t) goes into column n_col - 1 (see -N). Append +r to only place f(t) and leave the left-hand side of the matrix equation alone. If used with operators LSQFIT and SVDFIT you can optionally append the modifier +e which will instead evaluate the solution and write a data set with four columns: t, f(t), the model solution at t, and the residuals at t, respectively [Default writes one column with model coefficients]. Append +w if t_f(t) has a third column with weights, or append +s if t_f(t) has a third column with 1-sigma uncertainties. In those two cases we find the weighted solution. The weights (or sigmas) will be output as the last column when +e is in effect.
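
As a hedged sketch of how -A and LSQFIT fit together (the file ty.txt and the straight-line model y(t) = a + b*t are hypothetical; see also the worked example in the Examples section), one could build a 3-column system where column 0 holds the constant, -A places t in column 1 and y(t) in column 2, and LSQFIT returns the two coefficients:

gmt math -N3/1 -Aty.txt -C0 1 ADD -Ca LSQFIT = coeff.txt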

-Ccols

Select the columns that will be operated on until the next occurrence of -C. List columns separated by commas; ranges like 1,3-5,7 are allowed, plus -Cx can be used for -C0 and -Cy can be used for -C1. -C (no arguments) resets the default action of using all columns except the time column (see -N). -Ca selects all columns, including the time column, while -Cr reverses (toggles) the current choices. When -C is in effect it also controls which columns from a file will be placed on the stack.
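
For instance (a dummy example in the style of those in the Examples section; data.txt is hypothetical), to multiply only column 2 by 10 while passing the remaining columns through unchanged, one might use:

gmt math data.txt -C2 10 MUL = scaled.txt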

-Eeigen

Sets the minimum eigenvalue used by operators LSQFIT and SVDFIT [1e-7]. Smaller eigenvalues are set to zero and will not be considered in the solution.

-I

Reverses the output row sequence from ascending time to descending [ascending].

-Nn_col[/t_col]

Select the number of columns and optionally the column number that contains the “time” variable. Columns are numbered starting at 0 [Default is 2/0]. If input files are specified then -N will add any missing columns.

-Q

Quick mode for scalar calculation. Shorthand for -Ca -N1/0 -T0/0/1. In this mode, constants may have plot units (i.e., c, i, p) and if so the final answer will be reported in the unit set by PROJ_LENGTH_UNIT.
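
For example, this quick scalar calculation should report the hypotenuse of a 3-4-5 triangle:

gmt math -Q 3 4 HYPOT =
5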

-S[f|l]

Only report the first or last row of the results [Default is all rows]. This is useful if you have computed a statistic (say the MODE) and only want to report a single number instead of numerous records with identical values. Append l to get the last row and f to get the first row only [Default].
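
For example (a sketch; values.txt is a hypothetical one-column file), the median of the whole file can be reduced to a single record; use -Sl instead to report the last row:

gmt math -S -T values.txt MEDIAN =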

-T[min/max/inc[+b|i|l|n]|file|list]

Required when no input files are given. Builds an array for the “time” column (see -N). If there is no time column (i.e., your input has only data columns), give -T with no arguments; this also implies -Ca. For details on array creation, see Generate 1D Array.

-V[level]

Select verbosity level [w] (see full description).

-bi[ncols][t] (more …)

Select native binary format for primary input.

-bo[ncols][type] (more …)

Select native binary output. [Default is same as input, but see -o]

-d[i|o]nodata (more …)

Replace input columns that equal nodata with NaN and do the reverse on output.

-e[~]“pattern” | -e[~]/regexp/[i] (more …)

Only accept data records that match the given pattern.

-f[i|o]colinfo (more …)

Specify data types of input and/or output columns.

-g[a]x|y|d|X|Y|D|[col]zgap[+n|p] (more …)

Determine data gaps and line breaks.

-h[i|o][n][+c][+d][+msegheader][+rremark][+ttitle] (more …)

Skip or produce header record(s).

-icols[+l][+ddivisor][+sscale][+ooffset][,…][,t[word]] (more …)

Select input columns and transformations (0 is first column, t is trailing text, append word to read one word only).

-ocols[,…][,t[word]] (more …)

Select output columns (0 is first column; t is trailing text, append word to write one word only).

-q[i|o][~]rows[+ccol][+a|f|s] (more …)

Select input or output rows or data range(s) [all].

-s[cols][+a][+r] (more …)

Set handling of NaN records.

-wy|a|w|d|h|m|s|cperiod[/phase][+ccol] (more …)

Convert an input coordinate to a cyclical coordinate.

-^ or just -

Print a short message about the syntax of the command, then exit (NOTE: on Windows just use -).

-+ or just +

Print an extensive usage (help) message, including the explanation of any module-specific option (but not the GMT common options), then exit.

-? or no arguments

Print a complete usage (help) message, including the explanation of all options, then exit.

--PAR=value

Temporarily override a GMT default setting; repeatable. See gmt.conf for parameters.

Generate 1D Array

We will demonstrate the use of options for creating 1-D arrays via gmtmath. Make an evenly spaced coordinate array from min to max in steps of inc, e.g.:

gmt math -o0 -T3.1/4.2/0.1 T =
3.1
3.2
3.3
...
4.1
4.2

Append +b if we should take log2 of min and max, get their nearest integers, build an equidistant log2-array using inc integer increments in log2, then undo the log2 conversion. E.g., -T3/20/1+b will produce this sequence:

gmt math -o0 -T3/20/1+b T =
4
8
16

Append +l if we should take log10 of min and max and build an array where inc can be 1 (every magnitude), 2 (1, 2, 5 times magnitude), or 3 (1-9 times magnitude). E.g., -T7/135/2+l will produce this sequence:

gmt math -o0 -T7/135/2+l T =
10
20
50
100

For output values less frequently than every magnitude, use a negative integer inc:

gmt math -o0 -T1e-4/1e4/-2+l T =
0.0001
0.01
1
100
10000

Append +i if inc is a fractional number and it is cleaner to give its reciprocal value instead. To set up times for a 24-frames-per-second animation lasting 1 minute, run:

gmt math -o0 -T0/60/24+i T =
0
0.0416666666667
0.0833333333333
0.125
0.166666666667
...

Append +n if inc is meant to indicate the number of equidistant coordinates instead. To have exactly 5 equidistant values from 3.44 to 7.82, run:

gmt math -o0 -T3.44/7.82/5+n T =
3.44
4.535
5.63
6.725
7.82

Alternatively, give a file with output coordinates in the first column, or provide a comma-separated list of specific coordinates, such as the first 6 Fibonacci numbers:

gmt math -o0 -T0,1,1,2,3,5 T =
0
1
1
2
3
5

If you only want a single value then you must append a comma to distinguish the list from the setting of inc.
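
For example, a trailing comma should yield a single coordinate rather than being read as an increment:

gmt math -o0 -T650, T =
650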

If the module allows you to set up an absolute time series, append a valid time unit from the list year, month, day, hour, minute, and second to the given increment; add +t to ensure the column is recognized as absolute time (or use -f). Note: The internal time unit is still controlled independently by TIME_UNIT. The first 7 days of March 2020:

gmt math -o0 -T2020-03-01T/2020-03-07T/1d T =
2020-03-01T00:00:00
2020-03-02T00:00:00
2020-03-03T00:00:00
2020-03-04T00:00:00
2020-03-05T00:00:00
2020-03-06T00:00:00
2020-03-07T00:00:00

A few modules allow for +a which will paste the coordinate array to the output table.

Likewise, if the module allows you to set up a spatial distance series (with distances computed from the first two data columns), specify a new increment as inc with a geospatial distance unit from the list degree (arc), minute (arc), second (arc), meter, foot, kilometer, mile (statute), nautical mile, or survey foot; see -j for calculation mode. To interpolate Cartesian distances instead, you must use the special unit c.

Finally, if you are only providing an increment and will obtain min and max from the data, then it is possible that (max - min)/inc is not an integer, as required. If so, then inc will be adjusted to fit the range. Alternatively, append +e to keep inc exact and adjust max instead (keeping min fixed).

Operators

Choose among the following operators. Here, “args” are the number of input and output arguments.

Operator    args  Returns
--------    ----  -------
ABS         1 1   abs (A)
ACOS        1 1   acos (A)
ACOSH       1 1   acosh (A)
ACSC        1 1   acsc (A)
ACOT        1 1   acot (A)
ADD         2 1   A + B
AND         2 1   B if A == NaN, else A
ASEC        1 1   asec (A)
ASIN        1 1   asin (A)
ASINH       1 1   asinh (A)
ATAN        1 1   atan (A)
ATAN2       2 1   atan2 (A, B)
ATANH       1 1   atanh (A)
BCDF        3 1   Binomial cumulative distribution function for p = A, n = B, and x = C
BPDF        3 1   Binomial probability density function for p = A, n = B, and x = C
BEI         1 1   Kelvin function bei (A)
BER         1 1   Kelvin function ber (A)
BITAND      2 1   A & B (bitwise AND operator)
BITLEFT     2 1   A << B (bitwise left-shift operator)
BITNOT      1 1   ~A (bitwise NOT operator, i.e., return one’s complement)
BITOR       2 1   A | B (bitwise OR operator)
BITRIGHT    2 1   A >> B (bitwise right-shift operator)
BITTEST     2 1   1 if bit B of A is set, else 0 (bitwise TEST operator)
BITXOR      2 1   A ^ B (bitwise XOR operator)
CEIL        1 1   ceil (A) (smallest integer >= A)
CHICRIT     2 1   Chi-squared distribution critical value for alpha = A and nu = B
CHICDF      2 1   Chi-squared cumulative distribution function for chi2 = A and nu = B
CHIPDF      2 1   Chi-squared probability density function for chi2 = A and nu = B
COL         1 1   Places column A on the stack
COMB        2 1   Combinations n_C_r, with n = A and r = B
CORRCOEFF   2 1   Correlation coefficient r(A, B)
COS         1 1   cos (A) (A in radians)
COSD        1 1   cos (A) (A in degrees)
COSH        1 1   cosh (A)
COT         1 1   cot (A) (A in radians)
COTD        1 1   cot (A) (A in degrees)
CSC         1 1   csc (A) (A in radians)
CSCD        1 1   csc (A) (A in degrees)
DDT         1 1   d(A)/dt Central 1st derivative
D2DT2       1 1   d^2(A)/dt^2 2nd derivative
D2R         1 1   Converts degrees to radians
DENAN       2 1   Replace NaNs in A with values from B
DILOG       1 1   dilog (A)
DIFF        1 1   Forward difference between adjacent elements of A (A[1]-A[0], A[2]-A[1], …, NaN)
DIV         2 1   A / B
DUP         1 2   Places duplicate of A on the stack
ECDF        2 1   Exponential cumulative distribution function for x = A and lambda = B
ECRIT       2 1   Exponential distribution critical value for alpha = A and lambda = B
EPDF        2 1   Exponential probability density function for x = A and lambda = B
ERF         1 1   Error function erf (A)
ERFC        1 1   Complementary error function erfc (A)
ERFINV      1 1   Inverse error function of A
EQ          2 1   1 if A == B, else 0
EXCH        2 2   Exchanges A and B on the stack
EXP         1 1   exp (A)
FACT        1 1   A! (A factorial)
FCDF        3 1   F cumulative distribution function for F = A, nu1 = B, and nu2 = C
FCRIT       3 1   F distribution critical value for alpha = A, nu1 = B, and nu2 = C
FLIPUD      1 1   Reverse order of each column
FLOOR       1 1   floor (A) (greatest integer <= A)
FMOD        2 1   A % B (remainder after truncated division)
FPDF        3 1   F probability density function for F = A, nu1 = B, and nu2 = C
GE          2 1   1 if A >= B, else 0
GT          2 1   1 if A > B, else 0
HSV2LAB     3 3   Convert h,s,v triplets to l,a,b triplets, with h = A (0-360), s = B and v = C (0-1)
HSV2RGB     3 3   Convert h,s,v triplets to r,g,b triplets, with h = A (0-360), s = B and v = C (0-1)
HSV2XYZ     3 3   Convert h,s,v triplets to x,y,z triplets, with h = A (0-360), s = B and v = C (0-1)
HYPOT       2 1   hypot (A, B) = sqrt (A*A + B*B)
I0          1 1   Modified Bessel function of A (1st kind, order 0)
I1          1 1   Modified Bessel function of A (1st kind, order 1)
IFELSE      3 1   B if A != 0, else C
IN          2 1   Modified Bessel function of A (1st kind, order B)
INRANGE     3 1   1 if B <= A <= C, else 0
INT         1 1   Numerically integrate A
INV         1 1   1 / A
ISFINITE    1 1   1 if A is finite, else 0
ISNAN       1 1   1 if A == NaN, else 0
J0          1 1   Bessel function of A (1st kind, order 0)
J1          1 1   Bessel function of A (1st kind, order 1)
JN          2 1   Bessel function of A (1st kind, order B)
K0          1 1   Modified Bessel function of A (2nd kind, order 0)
K1          1 1   Modified Bessel function of A (2nd kind, order 1)
KN          2 1   Modified Bessel function of A (2nd kind, order B)
KEI         1 1   Kelvin function kei (A)
KER         1 1   Kelvin function ker (A)
KURT        1 1   Kurtosis of A
LAB2HSV     3 3   Convert l,a,b triplets to h,s,v triplets
LAB2RGB     3 3   Convert l,a,b triplets to r,g,b triplets
LAB2XYZ     3 3   Convert l,a,b triplets to x,y,z triplets
LCDF        1 1   Laplace cumulative distribution function for z = A
LCRIT       1 1   Laplace distribution critical value for alpha = A
LE          2 1   1 if A <= B, else 0
LMSSCL      1 1   LMS (Least Median of Squares) scale estimate (LMS STD) of A
LMSSCLW     2 1   Weighted LMS scale estimate (LMS STD) of A for weights in B
LOG         1 1   log (A) (natural log)
LOG10       1 1   log10 (A) (base 10)
LOG1P       1 1   log (1+A) (accurate for small A)
LOG2        1 1   log2 (A) (base 2)
LOWER       1 1   The lowest (minimum) value of A
LPDF        1 1   Laplace probability density function for z = A
LRAND       2 1   Laplace random noise with mean A and std. deviation B
LSQFIT      1 0   Let current table be [A | b]; return least squares solution x = A \ b
LT          2 1   1 if A < B, else 0
MAD         1 1   Median Absolute Deviation (L1 STD) of A
MADW        2 1   Weighted Median Absolute Deviation (L1 STD) of A for weights in B
MAX         2 1   Maximum of A and B
MEAN        1 1   Mean value of A
MEANW       2 1   Weighted mean value of A for weights in B
MEDIAN      1 1   Median value of A
MEDIANW     2 1   Weighted median value of A for weights in B
MIN         2 1   Minimum of A and B
MOD         2 1   A mod B (remainder after floored division)
MODE        1 1   Mode value (Least Median of Squares) of A
MODEW       2 1   Weighted mode value (Least Median of Squares) of A for weights in B
MUL         2 1   A * B
NAN         2 1   NaN if A == B, else A
NEG         1 1   -A
NEQ         2 1   1 if A != B, else 0
NORM        1 1   Normalize (A) so max(A)-min(A) = 1
NOT         1 1   NaN if A == NaN, 1 if A == 0, else 0
NRAND       2 1   Normal, random values with mean A and std. deviation B
OR          2 1   NaN if B == NaN, else A
PCDF        2 1   Poisson cumulative distribution function for x = A and lambda = B
PERM        2 1   Permutations n_P_r, with n = A and r = B
PPDF        2 1   Poisson distribution P(x,lambda), with x = A and lambda = B
PLM         3 1   Associated Legendre polynomial P(A) degree B order C
PLMg        3 1   Normalized associated Legendre polynomial P(A) degree B order C (geophysical convention)
POP         1 0   Delete top element from the stack
POW         2 1   A ^ B
PQUANT      2 1   The B’th quantile (0-100%) of A
PQUANTW     3 1   The C’th weighted quantile (0-100%) of A for weights in B
PSI         1 1   Psi (or Digamma) of A
PV          3 1   Legendre function Pv(A) of degree v = real(B) + imag(C)
QV          3 1   Legendre function Qv(A) of degree v = real(B) + imag(C)
R2          2 1   R2 = A^2 + B^2
R2D         1 1   Convert radians to degrees
RAND        2 1   Uniform random values between A and B
RCDF        1 1   Rayleigh cumulative distribution function for z = A
RCRIT       1 1   Rayleigh distribution critical value for alpha = A
RGB2HSV     3 3   Convert r,g,b triplets to h,s,v triplets, with r = A, g = B, and b = C (in 0-255 range)
RGB2LAB     3 3   Convert r,g,b triplets to l,a,b triplets, with r = A, g = B, and b = C (in 0-255 range)
RGB2XYZ     3 3   Convert r,g,b triplets to x,y,z triplets, with r = A, g = B, and b = C (in 0-255 range)
RINT        1 1   rint (A) (round to integral value nearest to A)
RMS         1 1   Root-mean-square of A
RMSW        2 1   Weighted root-mean-square of A for weights in B
RPDF        1 1   Rayleigh probability density function for z = A
ROLL        2 0   Cyclically shifts the top A stack items by an amount B
ROTT        2 1   Rotate A by the (constant) shift B in the t-direction
SEC         1 1   sec (A) (A in radians)
SECD        1 1   sec (A) (A in degrees)
SIGN        1 1   sign (+1 or -1) of A
SIN         1 1   sin (A) (A in radians)
SINC        1 1   sinc (A) (sin (pi*A)/(pi*A))
SIND        1 1   sin (A) (A in degrees)
SINH        1 1   sinh (A)
SKEW        1 1   Skewness of A
SQR         1 1   A^2
SQRT        1 1   sqrt (A)
STD         1 1   Standard deviation of A
STDW        2 1   Weighted standard deviation of A for weights in B
STEP        1 1   Heaviside step function H(A)
STEPT       1 1   Heaviside step function H(t-A)
SUB         2 1   A - B
SUM         1 1   Cumulative sum of A
TAN         1 1   tan (A) (A in radians)
TAND        1 1   tan (A) (A in degrees)
TANH        1 1   tanh (A)
TAPER       1 1   Unit weights cosine-tapered to zero within A of end margins
TN          2 1   Chebyshev polynomial Tn(-1<A<+1) of degree B
TCRIT       2 1   Student’s t distribution critical value for alpha = A and nu = B
TPDF        2 1   Student’s t probability density function for t = A and nu = B
TCDF        2 1   Student’s t cumulative distribution function for t = A and nu = B
UPPER       1 1   The highest (maximum) value of A
VAR         1 1   Variance of A
VARW        2 1   Weighted variance of A for weights in B
VPDF        3 1   Von Mises density distribution V(x,mu,kappa), with angles = A, mu = B, and kappa = C
WCDF        3 1   Weibull cumulative distribution function for x = A, scale = B, and shape = C
WCRIT       3 1   Weibull distribution critical value for alpha = A, scale = B, and shape = C
WPDF        3 1   Weibull density distribution P(x,scale,shape), with x = A, scale = B, and shape = C
XOR         2 1   B if A == NaN, else A
XYZ2HSV     3 3   Convert x,y,z triplets to h,s,v triplets
XYZ2LAB     3 3   Convert x,y,z triplets to l,a,b triplets
XYZ2RGB     3 3   Convert x,y,z triplets to r,g,b triplets
Y0          1 1   Bessel function of A (2nd kind, order 0)
Y1          1 1   Bessel function of A (2nd kind, order 1)
YN          2 1   Bessel function of A (2nd kind, order B)
ZCDF        1 1   Normal cumulative distribution function for z = A
ZPDF        1 1   Normal probability density function for z = A
ZCRIT       1 1   Normal distribution critical value for alpha = A
ROOTS       2 1   Treats col A as f(t) = 0 and returns its roots

Symbols

The following symbols have special meaning:

Symbol    Returns
------    -------
PI        3.1415926…
E         2.7182818…
EULER     0.5772156…
PHI       1.6180339… (golden ratio)
EPS_F     1.192092896e-07 (sgl. prec. eps)
EPS_D     2.2204460492503131e-16 (dbl. prec. eps)
TMIN      Minimum t value
TMAX      Maximum t value
TRANGE    Range of t values
TINC      t increment
N         The number of records
T         Table with t-coordinates
TNORM     Table with normalized t-coordinates
TROW      Table with row numbers 0, 1, …, N-1

ASCII Format Precision

The ASCII output formats of numerical data are controlled by parameters in your gmt.conf file. Longitude and latitude are formatted according to FORMAT_GEO_OUT, absolute time is under the control of FORMAT_DATE_OUT and FORMAT_CLOCK_OUT, whereas general floating point values are formatted according to FORMAT_FLOAT_OUT. Be aware that the format in effect can lead to loss of precision in ASCII output, which can lead to various problems downstream. If you find the output is not written with enough precision, consider switching to binary output (-bo if available) or specify more decimals using the FORMAT_FLOAT_OUT setting.
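
For example, temporarily overriding FORMAT_FLOAT_OUT shows that the setting, not the calculation, controls the number of decimals reported (here limited to three):

gmt math -Q --FORMAT_FLOAT_OUT=%.3f 1 3 DIV =
0.333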

Notes On Operators

  1. The operators PLM and PLMg calculate the associated Legendre polynomial of degree L and order M in x which must satisfy -1 <= x <= +1 and 0 <= M <= L. x, L, and M are the three arguments preceding the operator (see the short numerical check following this list). PLM is not normalized and includes the Condon-Shortley phase (-1)^M. PLMg is normalized in the way that is most commonly used in geophysics. The C-S phase can be added by using -M as argument. PLM will overflow at higher degrees, whereas PLMg is stable until ultra high degrees (at least 3000).

  2. Files that have the same names as some operators, e.g., ADD, SIGN, =, etc. should be identified by prepending the current directory (i.e., ./).

  3. The stack depth limit is hard-wired to 100.

  4. All functions expecting a positive radius (e.g., LOG, KEI, etc.) are passed the absolute value of their argument.

  5. The DDT and D2DT2 functions only work on regularly spaced data.

  6. All derivatives are based on central finite differences, with natural boundary conditions.

  7. ROOTS must be the last operator on the stack, only followed by =.
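
As a quick numerical check of note 1 (an illustration added here, not from the original), the arguments precede the operator in the order x, L, M; with the Condon-Shortley phase, the associated Legendre polynomial P(2,1) at x = 0.5 equals -3(0.5)sqrt(1 - 0.25), or about -1.299:

gmt math -Q 0.5 2 1 PLM =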

STORE, RECALL and CLEAR

You may store intermediate calculations to a named variable that you may recall and place on the stack at a later time. This is useful if you need access to a computed quantity many times in your expression as it will shorten the overall expression and improve readability. To save a result you use the special operator STO@label, where label is the name you choose to give the quantity. To recall the stored result to the stack at a later time, use [RCL]@label, i.e., RCL is optional. To clear memory you may use CLR@label. Note that STO and CLR leave the stack unchanged.
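
For example, this sketch stores cos(30°) under the label kc, then recalls it to square it; the result should be 0.75:

gmt math -Q 30 COSD STO@kc @kc MUL =
0.75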

  1. The bitwise operators (BITAND, BITLEFT, BITNOT, BITOR, BITRIGHT, BITTEST, and BITXOR) convert a table’s double precision values to unsigned 64-bit integers to perform the bitwise operations (see the example following this list). Consequently, the largest whole integer value that can be stored in a double precision value is 2^53 or 9,007,199,254,740,992. Any higher result will be masked to fit in the lower 54 bits. Thus, bit operations are effectively limited to 54 bits. All bitwise operators return NaN if given NaN arguments or bit-settings <= 0.

  2. TAPER will interpret its argument to be a width in the same units as the time-axis, but if no time is provided (i.e., plain data tables) then the width is taken to be given in number of rows.

  3. The color-triplet conversion functions (RGB2HSV, etc.) include not only r,g,b and h,s,v triplet conversions, but also l,a,b (CIE L*a*b*) and sRGB (x,y,z) conversions between all four color spaces. These functions behave differently depending on whether -Q is used. With -Q we expect three input constants and we place three output results on the stack. Since only the top stack item is printed, you must use operators such as POP and ROLL to get to the item of interest. Without -Q, these operators work across the three columns and modify the three column entries, returning their result as a single three-column item on the stack.

  4. The VPDF operator expects angles in degrees.
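
To illustrate note 1 above, ANDing 12 (binary 1100) with 10 (binary 1010) leaves only the common bit set and should report 8:

gmt math -Q 12 10 BITAND =
8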

Macros

Users may save their favorite operator combinations as macros via the file gmtmath.macros in their current or user directory. The file may contain any number of macros (one per record); comment lines starting with # are skipped. The format for the macros is name = arg1 arg2 … argn [ : comment], where name is how the macro will be used. When this operator appears on the command line we simply replace it with the listed argument list. No macro may call another macro. As an example, the following macro expects that the time-column contains seafloor ages in Myr and computes the predicted half-space bathymetry:

DEPTH = SQRT 350 MUL 2500 ADD NEG : usage: DEPTH to return half-space seafloor depths
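
With that macro in place, a hedged usage sketch (output file name arbitrary) would be to predict depths for ages 0 through 100 Myr by pushing the time column first:

gmt math -T0/100/1 T DEPTH = predicted_depth.txt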

Note: Because geographic or time constants may be present in a macro, the optional comment flag (:) must be followed by a space. As another example, we show a macro GPSWEEK which determines which GPS week a timestamp belongs to:

GPSWEEK = 1980-01-06T00:00:00 SUB 86400 DIV 7 DIV FLOOR : usage: GPS week without rollover

Active Column Selection

When -Ccols is set then any operation, including loading of data from files, will restrict which columns are affected. To avoid unexpected results, note that if you issue a -Ccols option before you load in the data then only those columns will be updated, hence the unspecified columns will be zero. On the other hand, if you load the file first and then issue -Ccols then the unspecified columns will have been loaded but are then ignored until you undo the effect of -C.

Absolute Time Column(s)

If input data have more than one column and the “time” column (set via -N [default column 0]) contains absolute time, then the default output format for any other columns containing absolute time will be reset to relative time. Likewise, in scalar mode (-Q) the time column will be operated on and hence it will also be formatted as relative time. Finally, if -C is used to include “time” in the columns operated on then we will likewise reset that column’s format to relative time. The user can override this behavior with a suitable -f or -fo setting. Note: We cannot guess what your operations on the time column will do, hence this default behavior. As examples, if you are computing time differences then clearly relative time formatting is required, while if you are computing new absolute times by, say, adding an interval to absolute times, then you will need to use -fo to set the output format for such columns to absolute time.

Examples

Note: Below are some examples of valid syntax for this module. The examples that use remote files (file names starting with @) can be cut and pasted into your terminal for testing. Other commands requiring input files are just dummy examples of the types of uses that are common but cannot be run verbatim as written.

To add two plot dimensions of different units, we can run

length=`gmt math -Q 15c 2i SUB =`

To take the square root of the content of the second data column being piped through gmtmath by process1 and pipe it through a 3rd process, use

process1 | gmt math STDIN SQRT = | process3

To take log10 of the average of 2 data files, use

gmt math file1.txt file2.txt ADD 0.5 MUL LOG10 = file3.txt

Given the file samples.txt, which holds seafloor ages in m.y. and seafloor depth in m, use the relation depth(in m) = 2500 + 350 * sqrt (age) to print the depth anomalies:

gmt math samples.txt T SQRT 350 MUL 2500 ADD SUB = | lpr

To take the average of columns 1 and 4-6 in the three data sets sizes.1, sizes.2, and sizes.3, use

gmt math -C1,4-6 sizes.1 sizes.2 ADD sizes.3 ADD 3 DIV = ave.txt

To take the 1-column data set ages.txt and calculate the modal value and assign it to a variable, try

mode_age=`gmt math -S -T ages.txt MODE =`

To evaluate the dilog(x) function for coordinates given in the file t.txt:

gmt math -Tt.txt T DILOG = dilog.txt

To demonstrate the use of stored variables, consider this sum of the first 3 cosine harmonics where we store and repeatedly recall the trigonometric argument (2*pi*T/360):

gmt math -T0/360/1 2 PI MUL 360 DIV T MUL STO@kT COS @kT 2 MUL COS ADD @kT 3 MUL COS ADD = harmonics.txt

To use gmtmath as a RPN Hewlett-Packard calculator on scalars (i.e., no input files) and calculate arbitrary expressions, use the -Q option. As an example, we will calculate the value of Kei (((1 + 1.75)/2.2) + cos (60)) and store the result in the shell variable z:

z=`gmt math -Q 1 1.75 ADD 2.2 DIV 60 COSD ADD KEI =`

To convert the r,g,b value for yellow to h,s,v and save the hue, try

hue=`gmt math -Q 255 255 0 RGB2HSV POP POP =`

To use gmtmath as a general least squares equation solver, imagine that the current table is the augmented matrix [ A | b ] and you want the least squares solution x to the matrix equation A * x = b. The operator LSQFIT does this; it is your job to populate the matrix correctly first. The -A option will facilitate this. Suppose you have a 2-column file ty.txt with t and your observations y(t), and you would like to fit the model y(t) = a + b*t + c*H(t-t0), where H is the Heaviside step function for a given t0 = 1.55. Then, you need a 4-column augmented table loaded with t in column 1 and your observed y(t) in column 3. The calculation becomes

gmt math -N4/1 -Aty.txt -C0 1 ADD -C2 1.55 STEPT ADD -Ca LSQFIT = solution.txt

Note we use the -C option to select which columns we are working on, then make active all the columns we need (here all of them, with -Ca) before calling LSQFIT. The second and fourth columns (col numbers 1 and 3) are preloaded with t and y(t), respectively, the other columns are zero. If you already have a pre-calculated table with the augmented matrix [ A | b ] in a file (say lsqsys.txt), the least squares solution is simply

gmt math -T lsqsys.txt LSQFIT = solution.txt

Users must be aware that when -C controls which columns are to be active the control extends to placing columns from files as well. Contrast the different result obtained by these very similar commands:

echo 1 2 3 4 | gmt math STDIN -C3 1 ADD =
1    2    3    5

versus

echo 1 2 3 4 | gmt math -C3 STDIN 1 ADD =
0    0    0    5

References

Abramowitz, M., and I. A. Stegun, 1964, Handbook of Mathematical Functions, Applied Mathematics Series, vol. 55, Dover, New York.

Holmes, S. A., and W. E. Featherstone, 2002, A unified approach to the Clenshaw summation and the recursive computation of very high degree and order normalized associated Legendre functions. Journal of Geodesy, 76, 279-299.

Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992, Numerical Recipes, 2nd edition, Cambridge University Press, New York.

Spanier, J., and K. B. Oldham, 1987, An Atlas of Functions, Hemisphere Publishing Corp.

See Also

gmt, grdmath