Improving performance when looping over a dynamic array
It's a frequent requirement to loop over a large dynamic array, e.g.:
A = <get a large array from somewhere>
FOR F = 1 TO DCOUNT(A,@AM)
   * ..do stuff with A
NEXT F
Some flavors of Pick Basic will optimize that DCOUNT so it is not re-evaluated on every iteration, but others will not. To be safe, it's better to move it out of the loop:
A = <get a large array from somewhere>
MAX.A = DCOUNT(A,@AM)
FOR F = 1 TO MAX.A
   * ..do stuff with A
NEXT F
But you still have to deal with the "do stuff with A" part, which will probably involve extracting the current attribute from A, i.e. A<F>. The problem with this is that the system scans from the start of the array every time to reach the value you want. If the array is very large, each extraction can take a significant amount of time, and the cost grows on every iteration. One way around this is the REMOVE statement, which "remembers" where in the array the last position was found, and can resume from there on the next call.
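The difference between the two access patterns can be illustrated outside Pick Basic. The following Python sketch simulates a dynamic array as a delimited string (this is only a model of the scanning behavior, not the actual runtime implementation; the function names `extract` and `remove_iter` are invented for illustration):

```python
# Simulate a dynamic array: attributes separated by an attribute mark.
AM = "\xfe"  # attribute mark, char 254 in Pick
array = AM.join(str(i) for i in range(100000))

def extract(dyn, f):
    """Like A<F>: locate attribute F by scanning from the start of
    the string every call, so a full loop costs O(n^2) overall."""
    return dyn.split(AM)[f - 1]

def remove_iter(dyn):
    """Like LOOP/REMOVE: keep a cursor so each call resumes where the
    previous one stopped, so a full pass costs O(n) overall."""
    pos = 0
    while True:
        nxt = dyn.find(AM, pos)
        if nxt == -1:
            yield dyn[pos:]   # last attribute
            return
        yield dyn[pos:nxt]
        pos = nxt + 1         # remember position for the next call
```

Timing `sum(len(extract(array, f)) for f in range(1, n + 1))` against `sum(len(x) for x in remove_iter(array))` shows the same quadratic-versus-linear gap that the Pick Basic benchmark below demonstrates.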
OPEN 'TMP' TO TMP.F ELSE STOP 201,'TMP'
T=SYSTEM(12)
READ A FROM TMP.F, 'test3.txt' ELSE STOP "CANNOT READ TEST.FILE"
CRT 'Read in:':SYSTEM(12)-T:'ms'
*
T=SYSTEM(12)
MAX.A=DCOUNT(A,@AM)
CRT MAX.A:' count in ':SYSTEM(12)-T:'ms'
*
L=0
T=SYSTEM(12)
FOR F=1 TO MAX.A
   L+=LEN(A<F>)
NEXT F
CRT 'FOR Loop:':SYSTEM(12)-T:'ms'
CRT L
*
L=0
T=SYSTEM(12)
LOOP
   REMOVE X FROM A SETTING MORE
   L+=LEN(X)
WHILE MORE DO REPEAT
CRT 'LOOP/REMOVE:':SYSTEM(12)-T:'ms'
CRT L
This shows a significant speedup: even without taking caching into account, the REMOVE approach is much faster.
Read in:11ms
100000 count in 2ms
For loop:71504ms
9048987
LOOP/REMOVE:35ms
9048987