Minors

*To*: mathgroup at smc.vnet.net
*Subject*: [mg52844] Minors
*From*: "Robert M. Mazo" <mazo at uoregon.edu>
*Date*: Tue, 14 Dec 2004 06:00:05 -0500 (EST)
*Organization*: University of Oregon
*Sender*: owner-wri-mathgroup at wolfram.com

The Minors command gives, as the (i,j) minor of an nxn matrix, what ordinary mathematical notation calls the (n-i+1, n-j+1) minor. I know how to work around this; it is explained on pg. 1195 of The Mathematica Book (version 4).

My question here is: why did the programmers of Mathematica define Minors in this unconventional way? They usually had a good reason for their programming quirks, but I can't think of a reason for this one. Can anyone enlighten me?

Robert Mazo
mazo at uoregon.edu
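For readers outside Mathematica, the index relation described above can be sketched in Python. This is a minimal illustration, not Mathematica's actual code: `conventional_minor` is the textbook (i,j) minor (delete row i and column j, take the determinant), and `mathematica_minors` is a hypothetical helper that builds a table following the convention the post describes, where entry (i,j) holds the textbook (n-i+1, n-j+1) minor. The sketch also shows the usual workaround: reversing the table in both directions recovers the conventional layout.

```python
import numpy as np

def conventional_minor(a, i, j):
    """Textbook (i, j) minor: determinant of the submatrix
    obtained by deleting row i and column j (1-based indices)."""
    sub = np.delete(np.delete(a, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

def mathematica_minors(a):
    """Hypothetical stand-in for Mathematica's Minors[m], per the
    convention described in the post: entry (i, j) is the
    textbook (n-i+1, n-j+1) minor."""
    n = a.shape[0]
    return np.array([[conventional_minor(a, n - i + 1, n - j + 1)
                      for j in range(1, n + 1)]
                     for i in range(1, n + 1)])

a = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

m = mathematica_minors(a)

# The workaround: reversing rows and columns of the
# Mathematica-style table yields the conventional table of minors.
conv = np.array([[conventional_minor(a, i, j)
                  for j in range(1, 4)]
                 for i in range(1, 4)])
assert np.allclose(m[::-1, ::-1], conv)
```

In Mathematica itself, the same reversal workaround is typically written by reversing the rows and the columns of the matrix returned by Minors.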

**Follow-Ups**:
**Re: Minors**
*From:* Garry Helzer <gah@math.umd.edu>