Like the bombing/scanning step for stones and scanners, the spread constants (the distances between the silk copies)
can make the difference between a decent and a good warrior.
Corestep.c by Jay Han and Mopt by Stefan Strack, available at the FTP site,
can give you a starting point, but for replicators the job is far more
complex because they change their constants during the spread process. Still,
there are several possible ways to optimize a paper. The simplest way
would be to pick some constants and watch via the graphical interface of
pmars how well the paper (visually) spreads. The best candidates could then be tested
against a benchmark.
But for today's demands we need more sophisticated methods.
5.1 Using CDB Macros:
As a first approach to optimizing paper constants, CDB was used intensively. This way of optimizing
is completely different from today's approaches. The main attention was paid to:
"What happens
to the paper in the core?" and not, as today:
"How does the paper score against warrior A, B, C, etc.?"
Below is the description found in the early issues of Core Warrior:
The goal is to run your warrior with pmars and use cdb macros that change code sections and record the result. The optimization is demonstrated on a slightly "un-optimized" version of T.Hsu's Ryooki:
nxt_paper equ 100 ;chosen with room for improvement
boot_paper spl 1 ,>4000
mov.i -1,#0
mov.i -1,#0
paper spl @paper,<nxt_paper ; A-fld is src, B-fld is dest
copy mov.i }paper,>paper
mov.i bomb ,>paper ; anti-imp
mov.i bomb ,}800 ; anti-vampire
jmn.f @copy ,{paper
bomb dat <2667 ,<2667*2
and we want to find a better offset between copies than the "100" in the
nxt_paper EQU. First we need to come up with some good
ways to measure an even spread between paper bodies in core. Here's an
approximation that cdb can easily provide:
after a few thousand cycles, a paper with a good offset
1) has more processes
2) covers more core locations
than a paper with a bad offset
Now the idea is simply to run multiple rounds, systematically changing the
silk offset at the beginning of each round, and having cdb report the number
of processes and the number of covered core locations after 5000 cycles or so. This can
all be automated with macros, so you can have pmars find optimal constants
while you get coffee (jolt? :). Once you have a few candidate offsets, you
should make sure they're working as you expect by looking at the core
display. You can then go on to find optimal bombing constants for your
set of optimal offsets in pretty much the same manner. As an example using
Ryooki above:
pmars -br 1000 -e ryooki.red
00000 SPL.B $ 1, > 4000
(cdb) 0,7
00000 SPL.B $ 1, > 4000
00001 MOV.I $ -1, # 0
00002 MOV.I $ -1, # 0
00003 SPL.B @ 0, < 100
00004 MOV.I } -1, > -1
00005 MOV.I $ 3, > -2
00006 MOV.I $ 2, } 800
00007 JMN.F @ -3, { -4
(cdb) calc i=99
99
This sets a variable "i" to our starting constant.
(cdb)@ed 3~spl @0,<i=i+1~@sk 5000~@pq~ca i,$+1~@pq off~m count~@go~@st
100,987
1830
(cdb)
This is a bit complicated. The "@ed 3~spl @0,<i=i+1" sequence edits address
3 and writes to it the instruction "SPL @ 0, < 100", having incremented
the "i" variable by 1. "@sk 5000" executes 5000 cycles silently, "@pq"
then switches into "process queue" display/edit mode. "calc i,$+1" echoes
the current value of the "i" variable, followed by the number of processes
("$" is the number of the last process). The output is seen on the next line:
"100,987". "@pq off" then switches back into core display/edit mode.
"macro count" executes a macro that is already defined in pmars.mac; the
"count" macro simply echoes the number of core locations that have anything
other than "dat 0,0" in them (here: 1830). Finally, "@go~@st" advance to the
end of this round and to the first cycle of the next round.
When you now press <Enter>, the command sequence is repeated with an offset
value of 101:
(cdb) <Enter>
101,1058
1971
(cdb)
The 101 offset results in a greater number of processes (1058) and more
addresses written to (1971). If you want to run the whole thing automatically,
just enclose the command sequence in a loop (!!~...~!) and send the
results to a file like so:
(cdb) ca i=99
99
(cdb) write ryooki.opt
Opening logfile
(cdb) !!~&ed 3~spl @0,<i=i+1~&sk 5000~&pq~ca i,$+1~&pq off~m count~&go~&st~!
To avoid sending _a_lot_ of garbage output to the log file, we have to use &
instead of @ in this macro and in the "count" macro in pmars.mac; just edit it.
count= &ca z=.~m w?~&ca x=.,c=0~!!~m w?~&ca c=c+1~if .!=x~!~ca c~&l z
w?= &search ,
You can easily refine this by only echoing
#processes/locations when the values are larger than anything seen so far (left as an
exercise to the reader), but at this point you are probably ready
to save yourself some typing by defining your own macros. Remember that
you can add macros from within the cdb session using the "@macro ,user"
command (a shorthand is "m="). You could even replace the rather simplistic
check for #processes/locations with a more elaborate macro that calculates
the variance of intervals between papers.
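Writing such a cdb macro is left to the reader, but the metric itself is easy to try out
outside of cdb. Here is a small sketch (in Python rather than cdb) of what such a check
would compute; the copy addresses below are made up for illustration and would in practice
be read off the core display after a few thousand cycles:

CORESIZE = 8000

def interval_variance(copies, coresize=CORESIZE):
    # Variance of the gaps between consecutive paper copies in core;
    # the smaller the variance, the more even the spread.
    addrs = sorted(a % coresize for a in copies)
    gaps = [(addrs[(i + 1) % len(addrs)] - addrs[i]) % coresize
            for i in range(len(addrs))]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

# hypothetical copy locations for two candidate offsets
even_spread   = [0, 2010, 3985, 6020]
uneven_spread = [0, 350, 700, 4200]
print(interval_variance(even_spread))    # small value -> even spread
print(interval_variance(uneven_spread))  # large value -> clustered copies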
Now we are ready to start making a paper warrior; all we have to do is
put the pieces together and get to work.
First the structure: we'll make a mid-size warrior, 8 lines of paper body, so we need 8
processes (each spl 1 doubles the number of processes, so three of them give 2^3 = 8).
start spl 1, <300 ;so we make 8 parallel processes
spl 1, <400 ;the <### are not needed to make it work
spl 1, <500 ;but may damage something and cost nothing
silk spl @0, {dest0
mov.i }-1, >-1
silk1 spl @0, <dest1
mov.i }-1, >-1
mov.i bomba, }range
mov {silk1, <silk2
silk2 jmp @0, >dest2
bomba dat <2667, <1
Now the constants: dest0 is the least used, so let's take a modulo 200 value;
for dest1 we take a mod 20 one. Now we begin optimization using Stefan's
method. I have a rather slow computer, so I chose to analyze only values
ranging from -2000 to -1000. Before doing so I changed the mov bomb line into
a nop instruction; optimizing the bombing will come later.
Running Stefan's macro I got -1278 as the best value.
Then I replaced the nop with the mov and ran the macro again, choosing a
range for the bombing between 500 and 1000. Best value: 933
5.2 Using Randomly Generated Constants:
Since the days when optimization via CDB macros was introduced, the method of optimizing papers
has changed from a "core watching" approach (A) to a pure "benchmarking against a set of warriors" approach (B).
Well, what is the reason for this change? Certainly not speed: approach B needs much, much more
processor time and is therefore far more time-consuming. The reason is simply that you get much better results than
with approach A. Why? Because a good spread as described in Chapter 5.1 isn't the only important factor.
The situation is much more complex. Much too complex to find an ultimate solution. A paper doesn't score well against a scanner just because of its good spread. It may also be that it takes advantage of the
scanner's step size, or that the paper overwrites/bombs the code it left behind (which could otherwise be found and wiped by
the scanner). Effects like these cannot be exploited just by watching the core. So the simplest way is the best: use
randomly generated constants and look at how the paper scores against a set of warriors.
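What such a random search can look like is sketched below. This is only a rough illustration
in Python, not part of the original tutorial: the file names, the STEP1/STEP2 placeholder
tokens and the parsing of pmars' "scores" summary line are assumptions that may need adjusting
to your own setup; the -b and -r switches are the same ones used with pmars above.

import random
import re
import subprocess

CORESIZE  = 8000
TEMPLATE  = open("paper.tpl").read()        # your paper source with STEP1/STEP2 tokens
BENCHMARK = ["stone1.red", "imp1.red", "scanner1.red"]   # the set of warriors
ROUNDS    = 100

def score(source):
    # Benchmark one candidate source against every warrior in the set.
    open("candidate.red", "w").write(source)
    total = 0
    for opponent in BENCHMARK:
        out = subprocess.run(["pmars", "-b", "-r", str(ROUNDS),
                              "candidate.red", opponent],
                             capture_output=True, text=True).stdout
        # pmars ends with one "<name> by <author> scores <n>" line per warrior;
        # the first such line is assumed to be the candidate's
        m = re.search(r"scores\s+(\d+)", out)
        if m:
            total += int(m.group(1))
    return total

best = (0, None, None)
for _ in range(200):                         # 200 random trials
    s1 = random.randrange(1, CORESIZE)
    s2 = random.randrange(1, CORESIZE)
    src = TEMPLATE.replace("STEP1", str(s1)).replace("STEP2", str(s2))
    s = score(src)
    if s > best[0]:
        best = (s, s1, s2)
        print("new best:", best)
print("best constants:", best[1], best[2])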
The order of constant optimization:
There are two possible ways to optimize a paper.
1. Change the bomb instructions in your paper into nop 1, 1 and optimize the silk steps first. Once you
have found good ones, put the bomb instructions back in and optimize the bombing constants (a sketch of this two-phase approach follows below).
2. Optimize all constants together.
My experience is that 1. works best to make a paper resistant against scanners and 2. works best to
create a good anti-imp paper.
Please tell me your experiences with that.
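To make the first ordering concrete, here is a companion sketch, again in Python and using the
mid-size paper from above as the template (the trial constants are made up): phase one turns the
bomb line into nop 1, 1 and fills in only the silk steps, phase two restores the bomb line, freezes
the steps found in phase one and varies only the bombing constant. The generated sources can be
fed to the score() helper from the sketch above.

PAPER = """\
start  spl    1,      <300
       spl    1,      <400
       spl    1,      <500
silk   spl    @0,     {dest0
       mov.i  }-1,    >-1
silk1  spl    @0,     <dest1
       mov.i  }-1,    >-1
       BOMBLINE
       mov    {silk1, <silk2
silk2  jmp    @0,     >dest2
bomba  dat    <2667,  <1
"""

BOMB_LINE = "mov.i  bomba,  }range"

def fill(src, **consts):
    # Substitute each named constant into the source text.
    for name, value in consts.items():
        src = src.replace(name, str(value))
    return src

def phase1_source(dest0, dest1, dest2):
    # Silk steps only; the bomb is a harmless nop and cannot bias the result.
    src = PAPER.replace("BOMBLINE", "nop    1,      1")
    return fill(src, dest0=dest0, dest1=dest1, dest2=dest2)

def phase2_source(dest0, dest1, dest2, bombstep):
    # Steps frozen at the phase-1 winners; only the bombing constant varies.
    src = PAPER.replace("BOMBLINE", BOMB_LINE)
    return fill(src, dest0=dest0, dest1=dest1, dest2=dest2, range=bombstep)

print(phase1_source(3400, 2640, 5310))       # made-up trial constants
print(phase2_source(3400, 2640, 5310, 933))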
The set of warriors:
Always keep in mind the importance of the set of warriors you benchmark against. It must be
carefully chosen to obtain the best possible results. Take several different kinds of stones/imps if you want to create
a good anti-imp paper, or several different scanners if you want a scanner-resistant paper.