MathGroup Archive 1996

Re: Re: programming competition

  • To: mathgroup at smc.vnet.net
  • Subject: [mg4915] Re: [mg4866] Re: programming competition
  • From: "w.meeussen" <w.meeussen.vdmcc at vandemoortele.be>
  • Date: Fri, 4 Oct 1996 00:17:48 -0400
  • Sender: owner-wri-mathgroup at wolfram.com

At 22:42 26-09-96 -0400, you wrote:
>Xah wrote:
>>Problem:
>>Suppose you have an ordered list {p1, p2, ... , pn} where the points have
>>the form {x,y}. You want to modify the list so that any neighboring points
>>have a distance less than or equal to a given value maxLength. You do this
>>by adding points in between points. For example, suppose p3 and p4 are
>>separated by a distance greater than maxLength. Your new list should then be
>>{p1, p2, p3, newP1, newP2,..., newPm, p4, p5, ... , pn} where
>>newP1, ...newPm lie on the line p3 p4.
>
>This was Xah's list of points:
>
>In[26]:=
>   li =  Join[ {{0,0},{1,1}}, Table[{i,1}, {i,1,2,1/5}], {{2,1}, {3,3}}];
>
>Xah gave linearInterpolate[] as a solution.
>

(****************** portion deleted ************************)
>
>In[34]:=
>   linearInterpolate2[ li, .3]
>Out[34]=
>   {{0.2, 0.2}, {0.4, 0.4}, {0.6, 0.6}, {0.8, 0.8}, {0, 0}, {1, 1}, 
>      {1, 1}, {6/5, 1}, {7/5, 1}, {8/5, 1}, {9/5, 1}, {2, 1}, {2.125, 1.25}, 
>      {2.25, 1.5}, {2.375, 1.75}, {2.5, 2.}, {2.625, 2.25}, 
>      {2.75, 2.5}, {2.875, 2.75}, {2, 1}, {3, 3}}

(****************** portion deleted ************************)

>
>This is better, but it still doesn't give the answer in the form Xah 
>specified; the new points are not positioned in the list between the points
>they're being interpolated between, i.e., they are not
>{p1, p2, p3, newP1, newP2,..., newPm, p4, p5, ... , pn}.
>
>With Lichtblau's and Hayes' admonitions to use procedural programming 
>to solve procedural problems ringing freshly in my ears, I offer the
>following:
>
(****************** portion deleted ************************)
>In[70]:=
>   insertPoints[li, .3]
>Out[70]=
>   {{0, 0}, {0.212132, 0.212132}, {0.424264, 0.424264}, 
>      {0.636396, 0.636396}, {0.848528, 0.848528}, {1., 1.}, {1., 1.}, 
>      {1.2, 1.}, {1.4, 1.}, {1.6, 1.}, {1.8, 1.}, {2., 1.}, {2., 1.}, 
>      {2.13416, 1.26833}, {2.26833, 1.53666}, {2.40249, 1.80498}, 
>      {2.53666, 2.07331}, {2.67082, 2.34164}, {2.80498, 2.60997}, 
>      {2.93915, 2.8783}, {3., 3.}}
>
>Xah's description of the problem didn't say anything about having the
>points evenly spaced, so they're not. But it would be simple to modify
>one line of code so they were evenly spaced.
>
>As Lichtblau points out, repeated copying of lists makes the problem 
>O(n^2).
>
>In[137]:=
>   pointsList1 = Table[{Random[], Random[]}, {100}];
>   pointsList2 = Table[{Random[], Random[]}, {1000}];
>
>In[139]:=
>   linearInterpolate2[ pointsList1, .3]; // Timing
>   linearInterpolate2[ pointsList2, .3]; // Timing
>Out[139]=
>   {3.8 Second, Null}
>Out[140]=
>   {15.75 Second, Null}
>
>Avoiding copying of lists keeps the problem at O(n):
>
>In[143]:=
>   insertPoints[pointsList1, .3]; // Timing
>   insertPoints[pointsList2, .3]; // Timing
>Out[143]=
>   {2.4 Second, Null}
>Out[144]=
>   {4.25 Second, Null}
>
>I periodically need to do fairly complicated operations on
>small lists. Using the list manipulation functions may make the 
>problem O(n^2), but for small lists this doesn't really matter.
>Using Append[], Rest[], and similar functions simplifies the
>code and makes it easier to develop and debug, compared to 
>using a procedural approach and trying to write the fastest
>code possible. In Xah's and Richard Gaylord's (actually, Don 
>Piele's) problems it was easier to develop a solution using
>a procedural approach, but often the easier approach is the one
>that results in slower code. What I ask from Mathematica is that
>the code be as clear and self-documenting as possible, the
>development process be as short and painless as possible, and
>the result be correct. If speed were that important to me
>I'd be writing in a compiled language. The problem with many
>blocks of code is not that they're slow, but that
>they're confusing. This makes it more difficult to debug
>and be sure of getting a correct result. Overall, I think
>top-down design and attention to formatting are more important
>than speed. Because of its clarity, I liked Paul Abbott's pattern
>matching approach to the Gaylord-Piele problem. It worked fine 
>on short lists and you could tell at a glance what the code was 
>doing.
>
>-- 
>Bob Hall            | "Know thyself? Absurd direction!
>rhall2 at gl.umbc.edu  |  Bubbles bear no introspection."  -Khushhal Khan Khatak
>
Bob,

It seems the kind of problem suited to a rule-based attack.

For a simplified one-dimensional case:
test=Table[n+Random[Real,{1,10}],{n,0,1000,10}]

A rule, rule1, is defined that leaves the inserted numbers grouped in sublists:

rule1={a___,b_/;NumberQ[b],c_/;NumberQ[c],d___}/;Abs[b-c]>dif:>
        {a,Range[b,c-dif,dif],c,d}

Timing[test//.rule1]

{2.242 Second, {{6.00759, 9.00759, 12.0076}, {15.7191}, 
... etc ...
  {963.993, 966.993}, {972.051, 975.051, 978.051}, {982.462, 985.462, 988.462}, 
{993.766, 996.766, 999.766, 1002.77, 1005.77}, 1009.67}}

A simple Flatten[] gets rid of the extra levels of list nesting.
So far, still lean & mean, don't you think?
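
For instance (a sketch of my own, not part of the session above: dif is the
step used by Range, and dif = 3 is only a guess based on the spacing in the
output), the whole one-dimensional computation then reads:

dif = 3;                                        (* step for the inserted numbers; guessed from the output *)
filled = Flatten[test //. rule1];               (* apply the rule, then drop the extra nesting *)
Max[Abs[Drop[filled, 1] - Drop[filled, -1]]]    (* largest gap left between neighbours *)
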
Now, in two dimensions, why not use complex numbers and keep the same
strategy?

testc=Table[{n+Random[Real,{1,10}],n+Random[Real,{1,10}]},{n,0,1000,10}]

rule2 = {a___, b_/;NumberQ[b], c_/;NumberQ[c], d___} /; Abs[b-c] > dif :>
        {a, Range[b, c - (c-b) dif/Abs[c-b], (c-b) dif/Abs[c-b]], c, d}

We only need to switch between {x,y} and x + I y and back; the step
(c-b) dif/Abs[c-b] in Range is simply a complex number of length dif pointing
from b towards c:

Timing[{Re[#],Im[#]}&/@Flatten[Apply[(#1+I #2) &, testc,1]//.rule2]]

gives:
{5.013 Second, {{6.67336, 4.32093}, {8.47046, 6.72311}, {10.2675, 9.12529}, 
        ... etc ...
    {917.71, 920.338}, {921.006, 924.902}, {923.581, 926.442}, {926.156, 927.982}, 
    {928.73, 929.522}, {931.97, 931.459}, {933.657, 933.94}, {935.344, 936.421}, 
    {937.031, 938.901}, {938.718, 941.382}, {940.406, 943.863}, {943.143, 947.888}, 
    {945.831, 949.221}, {948.518, 950.554}, {951.281, 951.925}, {953.626, 953.796}, 
    {955.972, 955.667}, {958.317, 957.537}, {960.662, 959.408}, {963.008, 961.279}, 
    {965.353, 963.149}, {968.588, 965.73}, {969.662, 968.532}, {970.735, 971.333}, 
    {971.808, 974.135}, {973.642, 978.922}, {976.562, 979.608}, {979.483, 980.293}, 
    {982.403, 980.979}, {985.324, 981.665}, {989.648, 982.68}, {990.212, 985.627}, 
    {990.775, 988.574}, {991.339, 991.52}, {991.902, 994.467}, {992.545, 997.828}, 
    {995.042, 999.49}, {997.54, 1001.15}, {1000.04, 1002.81}, {1002.92, 1004.73}}}
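
For reuse, the whole round trip could be packed into one function. The
following is only a sketch along the same lines (the name insertComplex and
the explicit dif argument are my own, not part of the session above):

insertComplex[pts_List, dif_] :=
  Module[{rule},
    (* same idea as rule2, but with dif passed in as an argument *)
    rule = {a___, b_/;NumberQ[b], c_/;NumberQ[c], d___} /; Abs[b-c] > dif :>
             {a, Range[b, c - (c-b) dif/Abs[c-b], (c-b) dif/Abs[c-b]], c, d};
    {Re[#], Im[#]}& /@ Flatten[Apply[(#1 + I #2)&, pts, 1] //. rule]
  ]

insertComplex[testc, 3]    (* with dif = 3 this should reproduce the list above *)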

I admit that it took me some time to get the Range[...] arguments right.
It needs RuleDelayed ":>", not Rule "->", to avoid unpleasant and unhelpful
red remarks about the arguments not being integers (silly, they needn't be
integers at all). Also, I fell into infinite recursion twice (because of //. ).
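
To see why "->" goes wrong, it is enough to evaluate such a rule on its own
(a deliberately broken variant, call it ruleWrong, just to show the effect):

ruleWrong = {a___, b_/;NumberQ[b], c_/;NumberQ[c], d___} /; Abs[b-c] > dif ->
        {a, Range[b, c-dif, dif], c, d}
(* with Rule, the right-hand side evaluates immediately, while b, c and dif are
   still bare symbols, so Range[] complains about its bounds on the spot;
   RuleDelayed ":>" postpones that evaluation until the pattern has matched *)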

All in all, ReplaceRepeated effectively scans over the list only once, because
each replacement inserts not numbers but sublists, and a sublist no longer
matches the NumberQ patterns.
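
A quick check makes that concrete: once a gap has been filled, the sublist is
invisible to the rule.

NumberQ[{6.00759, 9.00759, 12.0076}]    (* False: filled gaps can never rematch *)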

Final remark:
it would be more elegant if a condition like
        b_Number
were allowed, instead of
        b_/;NumberQ[b]
I lost quite some time on that.
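
That said, the PatternTest form b_?NumberQ is equivalent to b_/;NumberQ[b]
and nearly as terse as b_Number would have been; rule1 could just as well read:

rule1 = {a___, b_?NumberQ, c_?NumberQ, d___} /; Abs[b-c] > dif :>
        {a, Range[b, c-dif, dif], c, d}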

Wouter.



