
MathGroup Archive 2009


Re: Re: Mathematica 7.0.1.0 and some General Comments

  • To: mathgroup at smc.vnet.net
  • Subject: [mg97438] Re: [mg97384] Re: Mathematica 7.0.1.0 and some General Comments
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Fri, 13 Mar 2009 04:50:34 -0500 (EST)
  • References: <goqphr$lt2$1@smc.vnet.net> <gp5fou$9nr$1@smc.vnet.net> <200903120719.CAA23555@smc.vnet.net>

Mariano Suárez-Alvarez wrote:
> [...]
> Well, if you come up with a proof of a theorem
> which depends on non-trivial Mathematica code
> to do non-trivial computations, in what way can
> you possibly say that you know how the proof works,
> if *you* yourself, the author of the proof, do not
> know what Mathematica is really doing? Using
> closed-source code simply goes against the very
> spirit of open review which is essential to
> the scientific endeavor.
> 
> There was a recent discussion in this subject
> on the AMS Notices, which you can get at
> <http://www.ams.org/notices/200710/tx071001279p.pdf>.
> 
> -- m

It's the scientific journal equivalent of an op-ed article. It was part 
of an effort to encourage public (and perhaps also private) funding for 
development of certain open source mathematical software. You will be 
hard pressed to find vestiges of that effort, as all relevant web pages 
seem to have been yanked.

As to the larger issue of proof believability...that's something of a 
moving target. Computer verification of proofs (whether human or 
computer generated) is something of a hot topic, among other reasons 
because it is related to computer chip verification. One comment I 
really like comes from a practitioner in the field, John Harrison of 
Intel: "Nothing is ever 100% certain, and a foundational death spiral
adds little value." I do not claim to be entirely certain of the 
intended meaning, but I was present once when he said this (plenary talk 
at ICMS 2006). I had the strong impression he meant that one should 
trust those computations which are obviously not going to mess up your 
proof (and yes, "obviously" is context dependent and requires judgement).

A reason this is a moving target is that an ever-increasing set of 
"difficult" proofs involves hard-core computer computations. Which ones 
do we trust, and under what circumstances?

Here is an example. It is from a paper I have cited:
A. Zinani (2003). The expected volume of a tetrahedron whose vertices 
are chosen at random in the interior of a cube. Monatshefte für 
Mathematik 139:341-348.

In it the author states "Nonetheless the calculations require a great 
effort and necessarily entailed the use of a computer algebra package. 
We used the package Mathematica 4.0." This was to find several definite 
integrals. Do we trust the proof? I know enough about symbolic definite 
integration to not trust all such integrals as reliable. These, however, 
are all numeric and thus can be checked to close approximation by 
quadrature. Does this suffice for us to trust the proof? I believe this 
is a matter of opinion, and one that is gradually changing over time.
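That kind of quadrature cross-check can be sketched in a few lines. The 
following is an illustrative toy in plain Python, on an integral of my 
own choosing (not one from Zinani's paper): a known closed-form value is 
compared against an independent composite-Simpson estimate.

```python
import math

# Toy stand-in for a CAS result: the integral of x^2 * e^(-x) over
# [0, infinity) has the closed form Gamma(3) = 2.
closed_form = 2.0

def simpson(f, a, b, n=10_000):
    # Composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Truncate the infinite tail: the remainder past t = 40 is on the
# order of e^(-40), i.e. far below our tolerance
numeric = simpson(lambda t: t**2 * math.exp(-t), 0.0, 40.0)

print(numeric)  # should agree with the closed form to high accuracy
assert abs(numeric - closed_form) < 1e-8
```

Agreement of this sort does not prove the symbolic result, but a 
discrepancy would immediately flag a problem, which is the point of the 
check.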

Let me describe a common type of example. Many algebraic proofs can be 
accomplished via computation of a cylindric algebraic decomposition or a 
Groebner basis. In some cases there are relatively straightforward means 
of verifying the computations, but this is not always the case. When 
should we trust these computations? If they can be repeated by different 
programs? One thing I can say is that, despite the relative simplicity 
of, say, the Buchberger algorithm for Groebner bases, code to compute 
them can still contain bugs. Why? Because speed is important, and the 
effort to get good speed often adds complexity to the code. (This is 
true of far more basic computations such as, say, integer greatest common 
divisors.) Neither open source nor proprietary software is immune to 
algorithm complexity-induced bugs.
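To illustrate the "relatively straightforward means of verifying" half 
of this, here is a sketch using SymPy rather than Mathematica, on a 
made-up two-polynomial system of my own: one can independently check 
that every input polynomial reduces to zero modulo the claimed Groebner 
basis, and likewise for any polynomial believed to lie in the ideal.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical example system (not from this thread): the circle
# x^2 + y^2 = 1 intersected with the line x = y
polys = [x**2 + y**2 - 1, x - y]

G = sp.groebner(polys, x, y, order='lex')

# Easy half of a correctness check: each input polynomial must reduce
# to zero modulo the claimed basis, confirming the basis generates at
# least the original ideal
for p in polys:
    _, remainder = sp.reduced(p, G.exprs, x, y, order='lex')
    assert remainder == 0

# A consequence of the system: substituting x = y gives 2*y**2 - 1 = 0,
# so that polynomial should also reduce to zero
_, r = sp.reduced(2*y**2 - 1, G.exprs, x, y, order='lex')
assert r == 0
```

Note this only certifies ideal membership one direction; confirming that 
the basis is actually a Groebner basis (S-polynomial checks) is the 
harder half, and is exactly where one ends up trusting the software 
again.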
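On the point about repeating computations with different programs: 
differential testing is the cheapest version of that idea. A 
stdlib-only sketch, cross-checking a slow but easily audited textbook 
gcd against Python's optimized math.gcd on random inputs:

```python
import math
import random

def euclid_gcd(a, b):
    # Textbook Euclidean algorithm: slower than optimized library
    # routines, but simple enough to audit by eye
    while b:
        a, b = b, a % b
    return a

# Differential testing: the trusted-but-slow reference vs. the
# optimized implementation, on many random inputs
random.seed(1)
for _ in range(10_000):
    a = random.randrange(1, 10**9)
    b = random.randrange(1, 10**9)
    assert euclid_gcd(a, b) == math.gcd(a, b)
```

Random agreement is not a proof either, of course, but it is precisely 
the sort of check that makes a computation "obviously" trustworthy in 
the sense discussed above.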

I do not defend the lack of documentation of some of the more 
sophisticated algorithms in Mathematica (or elsewhere for that matter, 
but I bear no responsibility for other software). Having such 
documentation would serve many useful purposes. Giving the user (or 
journal referee) some indication of what are the underlying methods of 
computation is certainly one such.

The issue of "open source" is a different matter entirely. There are 
very good reasons for having open source software. In the case of 
mathematical software, making proofs "transparent" is not one of those 
reasons. Claims to the contrary are, well, considerably more politics 
than science (I'm being polite here). That complicated under-the-hood 
software might be available for perusal adds quite little to the process 
of actual verification, and moves not one iota the bar for what are 
those computations that "obviously" should be trusted.

This is not an appropriate forum for a debate on proprietary vs. open 
source math software. I will only remark that I think the world can use 
substantial amounts of both, and there are places in which they can and 
should intersect (code libraries being an important example).

Daniel Lichtblau
Wolfram Research
(This being largely an opinion piece, I will emphasize that I am not 
speaking for my employer).


