Game changer

One of the recurring issues with Wallace, in those rare times when I’m actually working on it, is the difficulty of interfacing Wallace with the NES emulation engine. Emulators generally aren’t written with the goal of being driven by another program; as a result my most recent effort at it involved a lot of ugly hacks to Mednafen to make it spew some useful data to stdout, and to cleverly disguise controller inputs generated by Wallace as a movie file.

However, I discovered by accident, while wasting time I could’ve been spending on Panflute instead of browsing the epic time sink that is TV Tropes, that some modern console emulators have built-in scripting support; in FCEUX’s case, the scripting support is also part of the often-relatively-neglected Linux version. Said scripting support offers a relatively straightforward way to write add-on code that mucks around with the game running inside the emulator, which would be orders of magnitude more elegant and maintainable than the approaches Wallace has taken up until now.

What do I mean by “muck around”? Well, you could add a way to drag-and-drop enemies in Super Mario Bros.:

Or give Mega Man kill-anything laser eyes (maybe he defeated Mr. Flibble Man?) in Mega Man 2:

Or replace the control scheme in Super Mario Bros. 3 with the one in Kirby: Canvas Curse, drawing platforms and obstacles on the screen using the mouse instead of having direct control over Mario:

Or add head-to-head network play to Nintendo’s Tetris game. You know, the one that doesn’t actually have a multiplayer mode to begin with:

Note that all that craziness is being implemented using Lua scripts in the emulator, without doing any hacking of the ROMs themselves. In FCEUX, loading a Lua script gives it control of the emulation loop and lets it do pretty much whatever it wants: poking around in game memory, messing with controller inputs, and anything else that can be done in Lua.

For the time being I’m not working on porting Wallace to FCEUX+Lua, but that’s mostly because I need to get Panflute in a releasable state and resurrecting Wallace right now would be too great a distraction.

Wallace Jr. Grows On Trees

I’ve updated Wallace Jr. to support actual genetic programming, instead of just genetic algorithms. That is, Wallace Jr. now generates programs that try to play video games, instead of just generating predefined controller input sequences.

Naturally, the language used by Wallace Jr. for its evolved programs is extremely limited. In fact, it’s really more of a language for writing expressions rather than what you’d typically consider a “program”: there’s no variables or subroutines or even loops. Here’s the entire (informal) specification of the language:

Expression           Definition
0x00, 0x01, …, 0xff  Constants in the range 0 – 255
(+ a b)              Add a and b, modulo 256
(- a b)              Subtract b from a, modulo 256
(* a b)              Multiply a and b, modulo 256
(& a b)              Bitwise AND a and b
(| a b)              Bitwise OR a and b
(^ a b)              Bitwise XOR a and b
(ifgt a b t f)       If a > b, return t; otherwise, return f
(read a)             Return the byte at address a in sprite RAM

That’s it. By combining those operations, you can read data from sprite RAM (which, among other things, will tell you where each sprite on the screen is), do computations on that data, and ultimately return a one-byte value interpreted as the next controller input.
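To make the semantics concrete, here’s a minimal sketch of an evaluator for this little language. This is my own illustration, not the code Wallace Jr. actually uses: trees are represented as nested tuples, and sprite_ram stands in for whatever provides the 256 bytes of sprite memory.

def evaluate (expr, sprite_ram):
    """Evaluate a tree expression; every result is reduced modulo 256."""
    if isinstance (expr, int):               # a constant like 0x1b
        return expr & 0xff
    op = expr[0]
    args = [evaluate (arg, sprite_ram) for arg in expr[1:]]
    if op == '+':    return (args[0] + args[1]) & 0xff
    if op == '-':    return (args[0] - args[1]) & 0xff
    if op == '*':    return (args[0] * args[1]) & 0xff
    if op == '&':    return args[0] & args[1]
    if op == '|':    return args[0] | args[1]
    if op == '^':    return args[0] ^ args[1]
    if op == 'ifgt': return args[2] if args[0] > args[1] else args[3]
    if op == 'read': return sprite_ram[args[0]] & 0xff
    raise ValueError ("unknown operation: %r" % (op,))

# For example, (& 0x19 0x4a) evaluates to 0x08:
assert evaluate (('&', 0x19, 0x4a), [0] * 256) == 0x08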

Here’s an example of what a program written in this language would look like:

(& (+ 0x1b (& (- (| 0x77 (^ 0x60 (read 0x81))) 0x23) (& 0x10 (& 0x28 (read 0xbe))))) (^ 0x56 (ifgt 0x62 (read (ifgt (| (read 0xa8) 0xfe) 0x93 0x30 0x34)) 0x23 (ifgt (+ 0xbd (read 0x70)) (^ 0xef (- (read 0x33) 0x4d)) (+ (& 0x27 (& 0x78 (ifgt (read 0x58) 0x41 0x76 0xb6))) (^ (read 0xe7) 0x9c)) (+ 0xb0 (read 0x08))))))

Strictly speaking, none of the parentheses are necessary. Since the language uses Polish notation and each operation has fixed arity, parsing an expression is unambiguous even without parentheses — an advantage not held by the more conventional (to the average person, anyway) use of infix notation for arithmetic operations.

In other words, the above mess means the same thing as the following mess, where parentheses have been removed to, um, change readability. Whether readability is improved or hindered is left as an exercise for the reader.

& + 0x1b & - | 0x77 ^ 0x60 read 0x81 0x23 & 0x10 & 0x28 read 0xbe ^ 0x56 ifgt 0x62 read ifgt | read 0xa8 0xfe 0x93 0x30 0x34 0x23 ifgt + 0xbd read 0x70 ^ 0xef - read 0x33 0x4d + & 0x27 & 0x78 ifgt read 0x58 0x41 0x76 0xb6 ^ read 0xe7 0x9c + 0xb0 read 0x08
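In case that claim isn’t obvious, here’s a minimal sketch of a parser for the parenthesis-free form (again my own illustration, not what Wallace Jr. actually does). Because every operation has a fixed arity, the parser always knows exactly how many sub-expressions to consume next, so no delimiters are needed:

ARITY = {'+': 2, '-': 2, '*': 2, '&': 2, '|': 2, '^': 2, 'ifgt': 4, 'read': 1}

def parse (tokens):
    """Consume one expression from the front of a token list, returning
    (tree, remaining_tokens).  Trees are nested tuples, as in the evaluator."""
    token, rest = tokens[0], tokens[1:]
    if token in ARITY:
        args = []
        for _ in range (ARITY[token]):
            arg, rest = parse (rest)
            args.append (arg)
        return (token,) + tuple (args), rest
    return int (token, 16), rest             # a constant like "0x1b"

tree, leftover = parse ("& 0x10 & 0x28 read 0xbe".split ())
assert tree == ('&', 0x10, ('&', 0x28, ('read', 0xbe))) and leftover == []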

Maybe just drawing it as an expression tree would help:

Generation 3 top performer, optimized version
(click for full version)

In the tree, each term appears as a node, with the arguments to that term shown as children of the node. Evaluation of the expression goes from the bottom up; as new subterms are evaluated, the results get passed up until you reach the root node at the top, at which time you have the value of the expression.

The above expression was generated in the fourth generation of a Wallace Jr. run, using pretty much the same input as last time, only with trees instead of controller input sequences. This time, the configuration looked something like this:

mednafen        /home/paul/mednafen-eval/instdir/bin/mednafen
 
buttons         right left up down b a
rom             /home/paul/roms/nes/Mega Man 2 (U).nes
initial_state   /home/paul/.mednafen/mcs/Mega Man 2 (U).0527a0ee512f69e08b8db6dc97964632.nc0
organism_type   tree
organism_size   1000
organism_depth  7
subtree_depth   4
granularity     5
population_size 500
generations     30
parallelism     2
debug           no
export_tail     1200
 
metric          health    0x06c0 1
metric          enemy     0x06c1 1
functions       megaman2.py
 
tournament_size             50
homologous_crossover_rate   0
arbitrary_crossover_rate    80
point_mutation_rate         5
subtree_mutation_rate       5
reproduction_rate           10
 
point_mutation_probability  0.05
crossover_leaf_probability  0.20

Here, Wallace Jr. is starting off with 500 randomly generated expression trees of maximum depth 7 — i.e., each starting tree has one root and up to seven levels below it. Each new generation is produced through crossover (swapping one node [and its children] with a node [and its children] from another tree), point mutation (randomly changing the content of some nodes without changing their arity), subtree mutation (replacing one node and its children with a randomly generated subtree), and reproduction (copying an organism unchanged from the previous generation).
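For anyone who hasn’t seen these operators before, here’s a rough sketch of subtree crossover and subtree mutation on the nested-tuple trees from earlier. This is purely illustrative; the real Wallace Jr. also enforces the depth limits and rates from the configuration above (presumably that’s what organism_depth and subtree_depth govern), and implements point mutation and reproduction besides.

import random

TERMINALS = list (range (0x100))     # the constants 0x00 .. 0xff
FUNCTIONS = {'+': 2, '-': 2, '*': 2, '&': 2, '|': 2, '^': 2, 'ifgt': 4, 'read': 1}

def random_tree (depth):
    """Generate a random expression tree at most `depth` levels deep."""
    if depth == 0 or random.random () < 0.3:
        return random.choice (TERMINALS)
    op = random.choice (list (FUNCTIONS))
    return (op,) + tuple (random_tree (depth - 1) for _ in range (FUNCTIONS[op]))

def all_positions (tree, path=()):
    """Yield every node's position as a path of child indices from the root."""
    yield path
    if isinstance (tree, tuple):
        for i, child in enumerate (tree[1:]):
            for position in all_positions (child, path + (i,)):
                yield position

def subtree_at (tree, path):
    for i in path:
        tree = tree[i + 1]               # children start at index 1
    return tree

def replace_at (tree, path, subtree):
    if not path:
        return subtree
    children = list (tree[1:])
    children[path[0]] = replace_at (children[path[0]], path[1:], subtree)
    return (tree[0],) + tuple (children)

def crossover (mother, father):
    """Replace a randomly chosen node (and its children) in `mother` with one from `father`."""
    where = random.choice (list (all_positions (mother)))
    donor = subtree_at (father, random.choice (list (all_positions (father))))
    return replace_at (mother, where, donor)

def subtree_mutation (tree, max_depth=4):
    """Replace a randomly chosen node (and its children) with a fresh random subtree."""
    where = random.choice (list (all_positions (tree)))
    return replace_at (tree, where, random_tree (max_depth))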

The fitness graph over 30 generations looked like this:

Fitness over 30 generations

It’s interesting how quickly things plateaued at a maximum fitness of 12 and never improved from there. Something similar happened last time, but not nearly so quickly. I’m not entirely sure why there was no improvement past a fitness of 12, but I suspect it’s because doing better than that actually requires fairly precise play. A fitness of 12 is about what you wind up with if you consistently deal damage to Air Man but get hit by a couple tornadoes between his jumps.

A little manual play lends some support to this hypothesis: taking the controller myself, my own fitness ranged from 12-16 most of the time, with 20 being a personal best. That 20 was very hard to come by, requiring some very carefully timed jumps to avoid near-unavoidable tornadoes. In this scenario, a fitness of 12 could be the point of diminishing returns, where each incremental improvement in fitness starts being much harder to come by.

Actually, when I said the expression I showed earlier was one generated by Wallace Jr., I lied. It’s actually an optimized version of this mess, which was the top performer in the fourth generation (and the first to achieve a fitness of 12):

Top performer of generation 3
(click for full version)

It’s quite a bit larger than the optimized version, since it’s doing silly things like computing (& 0x19 0x4a) each time instead of just using a constant 0x08. That’s pretty much all the optimizer in Wallace Jr. does, mainly to make the trees it generates more readable, or at least less unreadable, by simplifying things as much as possible.

After the third generation, there wasn’t any improvement in peak fitness. However, the expressions being generated tended to get bigger and bigger. Here’s the top performer of the thirtieth generation (or really, one of the many many trees tied for top performer), optimized:

Generation 29 top performer, optimized
(click for full version)

And in its full, unoptimized glory:

Generation 29 top performer, unoptimized
(click for full version)

Interestingly, if you compare the optimized versions of the Generation 29 top performer with the Generation 3 top performer shown above, you’ll see that the top four levels of the tree are, with the exception of one node, identical. This suggests the Generation 29 winner is descended from the Generation 3 winner. It would be interesting to study the full scope of their similarities, and the similarities with the winners in the intervening generations, to see what parts of the expression are so seemingly essential that they’ve been preserved from one generation to the next.

For the time being, doing so is also left as an exercise for the reader.

The Perils of Pipes

In my last post, I mentioned this in regards to making Wallace Jr. and my hacked version of Mednafen:

It’s not insurmountable, but having programs talk back and forth to each other is always a bit tricky, especially to avoid deadlocks: A is waiting for B to say something, and B is waiting for A to say something. Especially when B was never intended to talk to A to begin with.

My prediction of running into difficulties proved to be all too accurate, even despite expecting problems to arise and being exceedingly careful to avoid them. Sadly, this hardly qualifies to win the JREF prize, any more than predicting that the sun will rise tomorrow would.

Recall what I’m trying to do here: have Wallace Jr. generate a sequence of controller inputs, have Mednafen execute them while playing a game, and have Wallace Jr. evaluate what happens in the game as a result. This means there has to be bidirectional communication: one channel from Wallace Jr. to Mednafen, and a second from Mednafen back to Wallace Jr.

In the original design, the path into Mednafen was pretty simple, since the input sequence was predetermined. Wallace Jr. generated an MCM file (a Mednafen movie file, consisting of an initial state and, well, a sequence of controller input), saved it to disk, and told Mednafen where to find it when it was launched. Mednafen, in turn, printed out status information, which Wallace Jr. read at its leisure.

For those of you not familiar with programming, allow me to elaborate on that last part a bit. Most languages provide three standard input/output streams to programs: one for input (stdin), one for normal output (stdout), and one for outputting error messages (stderr). By default, when you run a program from the command line, stdin comes from the keyboard, and stdout and stderr get printed out to the terminal window. To the program, these three streams look just like any other file; in fact, in C they’re even represented as FILE *, the same as you’d get if you called fopen to open a file.

Since I said by default the streams are connected to the keyboard and the terminal window, that obviously implies this isn’t always the case. When you create a new process, you’re free to connect its standard streams to whatever you want. That’s what Wallace Jr. did: when launching Mednafen, it attached Mednafen’s stdout stream to a pipe, so that Wallace Jr. could read whatever Mednafen wrote.

The current version of Wallace Jr. goes one step further, attaching pipes to both Mednafen’s stdout and stdin streams:

child = subprocess.Popen ([self.executable_path,
                           "-video", self.debug and "1" or "0",
                           "-sound", self.debug and "1" or "0",
                           "-nothrottle", "1",
                           "-movie", "/dev/stdin",
                           "-metrics", self.project_file,
                           self.rom_file],
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          close_fds=1)

But wait, you object, the arguments I’m passing to Mednafen still tell it to read its movie from a file — specifically, a file named /dev/stdin. What gives? /dev/stdin is just a symlink to /proc/self/fd/0, which is in turn a symlink to whatever the current process’s stdin stream is. So really, giving Mednafen a file name of /dev/stdin is just telling it to read from its stdin.

(If that sounds like some kind of voodoo to you, keep in mind that on Unix-based systems, everything is a file. Files in the conventional sense of “a bunch of bytes with a name and stored on a disk” are just one type of file — files can also be pipes or devices or almost anything else. Everything in /proc is some type of information about the processes running on the system, exposed as a set of files. The underlying data is stored not on disk, but in the kernel’s internal data structures.)

Anyway, the ultimate goal in sending Mednafen controller inputs via a pipe instead of a file is so that, in the future, Wallace Jr. will be able to generate programs that decide the next controller input based on the current state of the game. To do that, obviously, it needs to see the game state at time t before the controller input at time t+1 can be sent. Writing all the controller input to a file ahead of time is right out.

If everything’s working, what should happen is that Wallace Jr. does a little processing, sends controller input to Mednafen, waits for Mednafen to respond with the game state, and repeats. Meanwhile, Mednafen waits for Wallace Jr. to send it controller input, emulates the next frame of the game, sends the updated game state to Wallace Jr., and repeats. If these ever get out of sync (namely, if both Mednafen and Wallace Jr. are waiting for the other to send something), you hit deadlock and nothing happens.

It’s easy to get wrong. Here are two examples of how things went wrong.

As a proof-of-concept, I initially tried having Wallace Jr. send everything at once through the pipe. Deadlock. However, I found that if I closed Wallace Jr.’s side of the pipe going to Mednafen, it worked! That was weird, since I was being careful to flush the pipe after writing, to make sure the data was actually getting sent instead of sitting around in a buffer.

After banging my head against it for a while, I ultimately figured out that gzip was the culprit. Normally, Mednafen movie files, which is ultimately what’s being sent to it via the pipe, are compressed with gzip, and I initially had Wallace Jr. do so as well. Apparently, however, the code on the Mednafen side of things that handles decompression was waiting to reach the end-of-file before actually decompressing the data. That’s why Mednafen did nothing until Wallace Jr. closed its side of the pipe, which would cause Mednafen to finally detect end-of-file.

Naturally, closing the pipe isn’t an option when sending controller input one frame at a time, since there’s no way to re-open a pipe once it’s closed. Luckily, however, Mednafen is happy with getting uncompressed movie data sent to it; taking gzip out of the equation entirely fixed the problem.

The second deadlock I ran into took even longer to diagnose. Here’s a simplified version of the code that triggered it. What it’s supposed to do is generate the next input, send it to Mednafen’s stdin, and then read from Mednafen’s stdout until it sees a line that starts with “metrics”. When it does, it processes it and decides whether to loop or not. What it actually does is deadlock immediately when it tries to read from Mednafen. Do you see why?

while should_continue:
    controller_input = compute_next_input ()
 
    child.stdin.write (controller_input)
    child.stdin.flush ()
 
    should_continue = False
    for line in child.stdout:
        if line.startswith ("metrics"):
            should_continue = process_metrics (line)
            break

Your first guess is that Mednafen isn’t actually writing any lines that start with “metrics”. That wasn’t the problem. Your second guess is that Mednafen isn’t flushing its output buffer, so the line isn’t getting sent through the pipe. Wrong again.

Give up? I’m not surprised — it’s very non-obvious.

The “for line in file” construct in Python iterates through the contents of a file, one line at a time. Internally, this happens by Python invoking file.next() each time through the loop to get the next line. Here’s what the Python manual says about the next() method for file objects (emphasis added):

A file object is its own iterator, for example iter(f) returns f (unless f is closed). When a file is used as an iterator, typically in a for loop (for example, for line in f: print line), the next() method is called repeatedly. This method returns the next input line, or raises StopIteration when EOF is hit when the file is open for reading (behavior is undefined when the file is open for writing). In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the next() method uses a hidden read-ahead buffer. As a consequence of using a read-ahead buffer, combining next() with other file methods (like readline()) does not work right. However, using seek() to reposition the file to an absolute position will flush the read-ahead buffer.

A hidden read-ahead buffer! next() is trying to be clever and trying to read more than just the next line, so it can pad out its buffer. However, Mednafen stops and waits for more input from Wallace Jr. after outputting a line, so next() waits for input that isn’t going to come until Wallace Jr. gets back to work. But that won’t happen until next() returns, which it won’t until it sees more input from Mednafen, even though it already has the line Wallace Jr. is asking for! Deadlock.

The fix is simple enough: use readline() to get the next line of the file instead of the nice syntactic sugar of the for loop, in order to avoid next()’s read-ahead buffer:

while should_continue:
    controller_input = compute_next_input ()
 
    child.stdin.write (controller_input)
    child.stdin.flush ()
 
    should_continue = False
    line = child.stdout.readline ()
    while line != "":
        if line.startswith ("metrics"):
            should_continue = process_metrics (line)
            break
        line = child.stdout.readline ()
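For what it’s worth, the inner loop can also be written with Python’s two-argument iter(), which just calls readline() repeatedly until it returns the sentinel, so it sidesteps the read-ahead buffer the same way. This is merely an alternative spelling, not what Wallace Jr. actually uses:

while should_continue:
    controller_input = compute_next_input ()

    child.stdin.write (controller_input)
    child.stdin.flush ()

    should_continue = False
    for line in iter (child.stdout.readline, ""):
        if line.startswith ("metrics"):
            should_continue = process_metrics (line)
            break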

I would actually call next()’s behavior here a bug. It’s perfectly reasonable for it to read past the end-of-line and buffer the excess, if there’s data after the newline. What’s happening internally is that the data next() got from the pipe ended with a newline, and it’s going ahead and trying to read from the pipe again just for the sake of filling its buffer. This actually decreases efficiency, since if next() isn’t going to be called again, it’s doing a read unnecessarily, at the cost of another system call. And if next() is going to be called again immediately, well, waiting to do the read until then doesn’t cost anything.

Of course, you wouldn’t notice the difference in practice, unless there’s no more data after the newline, but the file/pipe/whatever isn’t closed yet either. Which is exactly what happens in Wallace Jr.’s case.

The moral is: even if you know how tricky bidirectional interprocess communication is, it’s still trickier than you expect.

More on Wallace Jr.

To elaborate somewhat on my previous post, this is the input file I gave to Wallace Jr. in the run used to ultimately generate that video:

mednafen        /home/paul/mednafen-eval/instdir/bin/mednafen
 
buttons         right left up down b a
rom             /home/paul/roms/nes/Mega Man 2 (U).nes
initial_state   /home/paul/.mednafen/mcs/Mega Man 2 (U).0527a0ee512f69e08b8db6dc97964632.nc0
organism_size   1000
granularity     5
population_size 200
generations     50
parallelism     2
 
metric          health    0x06c0 1
metric          enemy     0x06c1 1
functions       megaman2.py
 
tournament_size             50
homologous_crossover_rate   40
arbitrary_crossover_rate    40
point_mutation_rate         10
reproduction_rate           10
 
point_mutation_probability  0.01

Note that the run used populations of 200 candidate solutions per generation. Each initial solution was 1,000 inputs long, with input granularity set to 5 frames — i.e., the same button input would be used for 5 consecutive frames (about 1/12 of a second) before advancing to the next. This was used to prevent solutions from being too spastic.
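In other words, granularity just means each evolved input byte gets held for that many frames before moving on to the next, conceptually something like this (an illustrative sketch, not the actual Wallace Jr. code):

def expand_inputs (inputs, granularity=5):
    """Repeat each one-byte controller input for `granularity` consecutive frames."""
    return [byte for byte in inputs for _ in range (granularity)]

# A 1,000-input organism therefore covers 5,000 frames, a bit over 80 seconds at ~60 fps.
assert len (expand_inputs ([0x00] * 1000)) == 5000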

The various _rate settings specify how to generate new solutions from existing ones (the crossover and mutation operators are sketched in code just after this list):

  • Homologous crossover takes one solution, deletes a randomly selected range within it, and replaces it with the corresponding range from another solution.
  • Arbitrary crossover is similar, but doesn’t require the two ranges to correspond to one another.
  • Point mutation randomly mutates each input with some small probability (here, 1%).
  • Reproduction just copies a solution unchanged from the previous generation.
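Here’s roughly what the crossover and mutation operators look like on plain input sequences (reproduction is just a copy). This is a minimal sketch of my own, not the actual Wallace Jr. implementation, and it glosses over details like keeping sequence lengths fixed:

import random

def homologous_crossover (a, b):
    """Replace a range of `a` with the same range taken from `b`."""
    start, end = sorted (random.sample (range (min (len (a), len (b))), 2))
    return a[:start] + b[start:end] + a[end:]

def arbitrary_crossover (a, b):
    """Replace a range of `a` with an unrelated range taken from `b`."""
    start, end = sorted (random.sample (range (len (a)), 2))
    donor_start, donor_end = sorted (random.sample (range (len (b)), 2))
    return a[:start] + b[donor_start:donor_end] + a[end:]

def point_mutation (a, rate=0.01):
    """With small probability, replace an input with a fresh random byte."""
    return [random.randrange (256) if random.random () < rate else x for x in a]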

When picking solutions for any of these techniques, Wallace Jr. uses a basic form of tournament selection. In tournament selection, you take a random subset of the population and pick the element with the highest fitness. This way, fitter solutions will get selected more frequently than less fit ones, but there’s still some randomness in the process. After all, if you always picked the fittest organism, you’d eliminate all the diversity in the population.
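A tournament like that is only a few lines of code; here’s a sketch (mine, not necessarily how Wallace Jr. spells it), assuming the population is kept as a list of (fitness, organism) pairs:

import random

def tournament_select (population, tournament_size=50):
    """Pick the fittest member of a random subset of the population."""
    contenders = random.sample (population, tournament_size)
    return max (contenders, key=lambda pair: pair[0])[1]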

The metric lines specify addresses in game memory that store interesting values. Here, it’s the health of Mega Man and of the boss of the stage. Each is just one byte, which makes things easy. These values are actually ignored by Wallace Jr. itself; instead, my hacked copy of Mednafen uses these to decide what values to output after each frame.

For the actual fitness function, I decided to just specify those in Python instead of coming up with some way to express it in the configuration file. Here’s what megaman2.py has in it:

#! /usr/bin/env python
 
def compute_fitness (current_metrics, old_metrics, context):
    """
    Reward damage done to the enemy and health preserved.  Penalize runs that
    end before someone dies via a penalty for frames where both are still
    alive.  Also try to penalize activation of the door glitch by watching
    for increases in health, which indicate death-by-spikes.
    """
 
    def increased (key):
        return key in old_metrics and current_metrics[key] > old_metrics[key]
 
    fitness = current_metrics["health"] - current_metrics["enemy"]
    if "battle_started" not in context:
        fitness -= 500
    if current_metrics["health"] > 0 and current_metrics["enemy"] > 0:
        fitness -= 100
    if increased ("health"):
        fitness -= 500
    return fitness
 
def should_terminate (current_metrics, old_metrics, context):
    """
    Terminate once someone dies or the aforementioned sign of the door
    glitch is observed.
    """
 
    def dropped_to_zero (key):
        return current_metrics[key] == 0 and key in old_metrics and old_metrics[key] > 0
    def increased (key):
        return key in old_metrics and current_metrics[key] > old_metrics[key]
 
    if "battle_started" not in context and increased ("enemy") and current_metrics ["enemy"] == 28:
        context["battle_started"] = True
 
    return dropped_to_zero ("health") or dropped_to_zero ("enemy") or increased ("health")

The interface between Wallace Jr. and the fitness code can definitely stand to be improved. The functions are passed the current values of the metrics, the previous values of the metrics, and a context value the fitness code is free to manipulate as it chooses to save information between calls. Most of the complexity is to handle the cases I mentioned in the previous post about avoiding undesirable outcomes like running out of input or activating the door glitch.

The astute reader will notice that I ran things out to 50 generations, but the video only shows examples up through Generation 13. Why? This:

Fitness of Wallace Jr. run

Things kind of hit a wall after Generation 13, with no real improvement any way you look at things. I suspect this was due to a combination of a fairly small generation size (200) and having 80% of each generation produced by crossover of existing solutions, with only 10% undergoing a handful of point mutations. Presumably, around the time Generation 13 was reached, the population had become a sort of inbred monoculture with little diversity between organisms, meaning it would take a long time for beneficial mutations to arise.

Sadly, I don’t have any way to go back and test this hypothesis, since the only organisms I saved were the peak performers of each generation. It may be worthwhile for later versions to compute Hamming distances or something like that to get a measure of how much diversity is in the population, and see how that changes over time.
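That diversity measure could be as simple as the average pairwise Hamming distance between input sequences; here’s a quick sketch of the idea (again, not something Wallace Jr. currently does):

def hamming (a, b):
    """Number of positions where two equal-length input sequences differ."""
    return sum (1 for x, y in zip (a, b) if x != y)

def mean_pairwise_distance (population):
    """Average Hamming distance over all pairs; a value near zero means a monoculture."""
    total, pairs = 0, 0
    for i in range (len (population)):
        for j in range (i + 1, len (population)):
            total += hamming (population[i], population[j])
            pairs += 1
    return float (total) / pairs if pairs else 0.0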

One big limitation to Wallace Jr. currently is that it evolves input sequences, which are inherently going to be very specific to the initial state of the game when the solution starts playing it. In other words, if I played up to Air Man in a different playthrough and handed control off to the peak performer, it wouldn’t do nearly as well, since the state of the game wouldn’t match up with what it “expects”.

It’s like if you memorized the directions for getting to the store as a series of turns to make at each intersection — right, left, straight, right, right, etc. You’d do fine, unless I had you start a block or two over from your expected starting point. Suddenly, the directions are no longer correct.

What would be more interesting is to evolve programs to play the game, rather than mere input sequences. Of course, this would be a fair bit more difficult to implement, since now the solutions will need a way to observe the state of the game as part of their computation of what button input to produce. One way to do this would be to let it inspect the part of PPU memory where sprite information is stored, sort of letting it “read” what sprites are on the screen and where they’re at. The tricky part is getting Wallace Jr. and Mednafen to talk to each other like that while the emulation is going on — right now Wallace Jr. generates a Mednafen movie file and hands it off to Mednafen to play.

It’s not insurmountable, but having programs talk back and forth to each other is always a bit tricky, especially to avoid deadlocks: A is waiting for B to say something, and B is waiting for A to say something. Especially when B was never intended to talk to A to begin with.

If you’re interested in learning more about the theory behind genetic algorithms and genetic programming, you may want to take a look at A Field Guide to Genetic Programming, which is a freely downloadable book.

The return of Wallace!

For whatever reason, I recently got it in my head to take another stab at my long-since-abandoned effort to apply genetic algorithms to the utterly pointless task of playing old video games. Here’s the result:

The gruntwork seen above was done using Wallace Jr., which as its name suggests, is a significantly stripped-down version of Wallace, my original attempt from a couple years back. Wallace’s ultimate failing was trying to do a little bit of everything: being a graphical emulator front-end, reverse engineering memory locations, doing the genetic algorithm itself, video encoding, etc. As a result, it did none of them particularly well, and continuing problems trying to shoehorn GStreamer into the processing pipeline ultimately led to its abandonment.

Instead, Wallace Jr. just does the genetic algorithm stuff: generating organisms, evaluating them, and spawning new generations. That’s it. The emulation itself is passed off to a modified version of Mednafen — specifically, a version hacked to enable noninteractive use and to dump interesting memory values while executing. The actual reverse engineering bit is left to something else; it accepts a list of memory locations to use as metrics when evaluating fitness, but leaves it up to you to find a way to figure them out.

As a result, Wallace Jr. accomplishes its core mission without getting bogged down in other things. Also, by offloading emulation into a separate process, it has the nice side benefit of enabling parallelism, since for whatever reason emulator authors seem to love global variables for all important state, which pretty much eliminates any hope of emulating more than one thing at a time in the same memory space.

Wallace Jr. is still pretty basic, but it’s complete enough to generate the results in the video, wherein it evolves a set of controller inputs capable of defeating Air Man in Mega Man 2 in just 11 generations. In that demonstration, fitness is basically defined as Mega Man’s health minus Air Man’s health, with the goal of maximizing the fitness measure.

I say “basically” because genetic algorithms, much like biological evolution, are prone to finding clever ways to solve problems, and the naive fitness measure I just described lets some of these slip through. In particular, if input ends before either Air Man or Mega Man dies, the fitness will still be the difference between their health meters. If input ends really early, like, say, before the battle starts when Air Man has nominally zero health, the outcome is the optimal fitness measure of 28, even though that solution really isn’t what we want. The real fitness measure I used checks for that, and throws in a -100 penalty if input ends before a death, and a -500 penalty on top of that if the battle hasn’t even begun yet.

More amusingly, seen in the “outtake” at the end of the video, in one of the runs I attempted while developing Wallace Jr., it managed to randomly hit upon controller input that activated the Air Man door glitch. I didn’t even know that was possible without using Item 1, but apparently it is. I had to adjust the fitness function to penalize that as well.

I hope you appreciate that video, since it took a ridiculous amount of effort to make. For starters, Mednafen doesn’t have a way to output a normal video file — the movies it makes are just recordings of controller input. Hacking in a way to dump the raw audio samples to a file was simple enough, but video proved to be more problematic; raw video data is too large to write to a file without making the emulator horrifically I/O-bound, disrupting emulation as it tries to frame-skip to recover. Integrating a true video encoder into Mednafen was more work than I wanted to do, so I took the extraordinarily hackish solution of adding a way to dump PNG screenshots every couple frames (since doing it every frame ran into the same I/O issues).

Luckily, MEncoder has a way to convert a series of PNGs into a more conventional video file, but that still left the issue of stitching them all together into a single movie. I managed to use PiTiVi for that, despite its best efforts to be a bug-ridden half-working pile of something you’d expect to find Mike Rowe knee-deep in. Sadly, the alternatives for non-linear video editing on Linux are even worse.

The interstitials were a pain to make, since PiTiVi doesn’t provide a way to add captions to video clips. For that, I wound up manually putting together a GStreamer stream via gst-launch-0.10 and the textoverlay element. The trick there was making sure to encode the resulting video stream in a format that GStreamer could then read back when used by PiTiVi. For some reason, GStreamer will happily encode to lossless MJPEG but has no idea how to decode it!

Once that was finally done, I switched back to MEncoder to add the background music. Yes, I know the last note gets cut off at the end, but I wasn’t about to wrestle with PiTiVi again to lengthen the video by a couple seconds, especially given that PiTiVi doesn’t let you save your project! The mind boggles.

Splicing in a separate audio track also nicely hides how my hack of dumping screenshots rapid-fire instead of making a proper video causes synchronization problems when trying to combine it back with the dumped audio. I could never get the framerate quite right, but without the original audio, it’s more difficult to notice in the video.

Plus, it’s an appropriate song, even if its central premise gets refuted by Generation 10.

Wallace: Behind the Scenes

Out of curiosity, how interested are people in reading about the technical details behind the Wallace rewrite? I could write up a couple posts about some of the technical challenges encountered and/or achievements made since the last update, but there’s not all that much in the way of stuff that can be shown off.

For example, does anyone want to hear about how user input is collected? Or maintaining audio/video synchronization and minimizing audio lag? Or performing caps negotiation with downstream elements in the processing pipeline? (I’ve certainly learned quite a bit on that last one over the past couple days.) There’s interesting stuff there for anyone who wants to know more about writing GStreamer elements or who is just curious about what kind of processing is needed to make Wallace work.

I’ll probably write about some of this stuff anyway, if only to have it out there and Googleable for anyone that’s interested. But if there’s not a whole lot of interest in that sort of thing, that’ll set the bar higher for deciding what’s blogworthy about Wallace development, especially given that there’s a non-trivial investment in time preparing a post about something like that.

GStreamer audio sinks are picky

If you ever get the urge to write your own GStreamer element, and one of the things your element will output is an audio stream, I have a word of advice that might save you a fair amount of frustration.

All of the typical sink elements that output an audio stream to a sound card (such as alsasink, esdsink, osssink, or even gconfaudiosink, which after all is probably just going to wrap one of the first three) require that your buffer offsets be measured according to the stream time (GST_FORMAT_TIME), and not any other way. It doesn’t matter if the caps you negotiated for your source pad clearly specified the sample size and rate, or even that you’re setting the timestamp for each of your buffers. If your offsets don’t use GST_FORMAT_TIME, it won’t work.

Specifically, you’ll wind up with this critical error message spit out by one of the base classes most of the audio sinks shipped with GStreamer inherit from:

GStreamer-CRITICAL **: gst_segment_clip: assertion `segment->format == format' failed

And that’s because that base class hard-codes GST_FORMAT_TIME as the segment format in its call to gst_segment_clip.

Note that the error message you get doesn’t clearly say that this is the reason it fails, unless you happen to install the debugging symbols for all the GStreamer libraries and plugins, and then go into a debugger to see what’s calling that function (after remembering to set the G_DEBUG environment variable to fatal_criticals so you get a core dump at the site of the problem), and then look up that part of the GStreamer source code to find out about the hard-coded use of GST_FORMAT_TIME.

Now, for those of you who notice what category this was posted under, yes, this means I’ve been tinkering with Wallace again. In particular, I’m completely rewriting how it uses GStreamer to output audio and video streams from the emulator into something useful (i.e., either playing the streams or encoding them to a video file). I’ve come to the conclusion that my old methodology was fundamentally completely wrong, and it’s a wonder I was able to make it work at all, kind of sort of.

This time, I’m having GStreamer drive execution of the emulator itself, instead of handling that on my own and trying to shovel its output into the pipeline, which makes scheduling and timing really really brittle. Now, each time GStreamer thinks the emulator should produce some more output, it will call (through the nesemulator element I’m writing) the emulator more or less directly. This ends up simplifying a good chunk of code, since GStreamer is (naturally) pretty good at figuring out when it needs more output data. At least, GStreamer is going to do a much better job than my abusing g_idle_add and nanosleep.

Also, instead of treating nesemulator as a source element, it will act as a decoder element. So what input stream does it take? Why, a sequence of button inputs, of course! GStreamer’s fakesrc can be coaxed into providing a null input for testing nesemulator quite nicely. Of course, I’ll need to write a new element to poll user input and convert that into the right button presses, but I was going to have to do that in one way or another anyway. Even better, this approach provides a cleaner way to implement playback of FCEU movie files: write another element that decodes a .fcm file into the corresponding input sequence, and stream that into the nesemulator element.

In fact, this approach can also be used to write an NSF decoder element, by wrapping the nesemulator element, discarding its video output, and controlling selection of which song to play. If you were ever longing to listen to 8-bit music in Rhythmbox, that’d let you do it.

For the moment, though, I just have a rough but functional implementation of nesemulator, which lets you do simple stuff like this:

gst-launch-0.10 fakesrc sizetype=2 sizemin=4 sizemax=4 filltype=2 ! nesemulator name=emu location=Mega\ Man\ 2\ \(U\).nes emu.video_src ! queue ! ffmpegcolorspace ! xvimagesink emu.audio_src ! queue ! audioconvert ! alsasink

You know, in case you get the urge to watch the intro to one of the greatest video games of all time.

What happened to Wallace anyway?

So it’s been several months since the last Wallace update. What’s been going on since then? Well, not much.

It may surprise you, but the main reason I sort of wandered away from it has been frustration with getting GStreamer to do what I wanted it to. I wanted to use GStreamer for the audio/video output since then I effectively get video encoding for free: it’s just a matter of building a pipeline that sends the output to a video file instead of the screen and speakers.

However, my skills at writing GStreamer elements are wanting at best, and the design I was going for — a real-time source that has separate audio and video sources — was a bit more advanced than what’s covered in the beginners’ documentation. Things got to a point where sending the A/V to the speakers and screen would either hang the program or cause a very noticeable amount of desync. I think I also managed to track part of it to a bug in GStreamer at the time, which presumably has since been fixed, but I never really went back to the code after throwing my hands up.

On the actual what-the-program-is-supposed-to-be-doing front, I’ve come to the conclusion that to do things “right” would require having some degree of debugging/disassembly features in Wallace, to allow you to pick apart the game and figure out what’s going on inside it. There are some things — in particular, determining the player’s position within a level — that just can’t be done without getting at whatever internal state the game uses to keep track of it.

Of course, doing that would entail treating the emulator code (which I took from FCE Ultra) less like a black box. While the emulation itself is pretty solid, the source code to it, while not quite being WTF-worthy, is hardly the easiest thing to understand, especially if you aren’t intimately familiar with the NES’s internals.

Reverse engineering a game is certainly possible. After all, ROM hacks aren’t hard to find — for example, look one post down — and can certainly require a good understanding of the game’s code and data to pull off. It’s just that that’s more effort than would be required to have some degree of reverse engineering support in Wallace.

And looking farther out, assuming everything else gets worked out, it all comes down to how much processing power you can throw at “growing” your “solution”. And unless someone suddenly donates me a 500 YHz processor, that’ll involve distributed computing, which to implement well would require (among other things) splitting the emulator core and the genetic algorithm into separate libraries, to minimize the dependencies on the distributed client code. Which wouldn’t be all that difficult in principle, as FCE Ultra already had a quasi-library-like interface, but it was geared towards code that got directly linked in, and the function calls went both ways. So, it’s just one more thing to clean up.

Plus, having gotten distracted by a shiny new thing to work on is only going to delay things further. I’d at least like to get its AI to the point where it can’t be beaten by whatever’s been growing in my old college dorm refrigerator for the past five years or so before running off to something else.

So, to answer your question Ryan, in theory, yes, Wallace could play it. In theory. In theory, communism works. In theory.


Wallace 0.99.1 Released

I finally got around to tinkering with Wallace, the tool for using genetic algorithms to play Nintendo games, again. The latest version of the software has several new features, a couple of which I’ll mention here.

The biggest is that your solutions can now take the form of a button input sequence, instead of being a program that runs in a VM and generates button input. This has the advantage of producing interesting results a lot faster, at the cost of your solutions being tied to your initial conditions. So, while you might be able to generate something that plays a particular Dr. Mario screen very well, it will crash and burn on any different starting condition — it’s just a dumb sequence of moves, after all.

There are many other, more subtle, improvements to the evolver as well. The selection phase now uses tournament selection to choose who lives on to the next generation, instead of just taking the top n solutions so far. This lets you vary selection pressure from brutal (top n) to nonexistent (creationists’ “evolution is entirely random” strawman) and anywhere in between. You can also now set the rate of mutation to a much wider range, in particular setting it to rates far lower than the (ridiculously high) minimum available in the previous version.

Also, the evolver prompts you to save your progress before it closes. I can’t tell you how many times the lack of that feature has caused me to lose a lot of data.

So, how do things shape up now? Here’s the statistics from a relatively quick sample run, on the same Dr. Mario screen as last time. Now, however, I’m using button input sequences with Select and Start filtered out (you can do that now too), with a population ranging from 200 to 2000 solutions, with fairly mild selection pressure (15%ish?), with a maximum mutation rate of 20 mutations per organism. The fitness function now measures final score, with penalties for the time intervals between each time you earn points. So, if you score 300 points at time 5 and 900 points at time 15, your fitness is 300 – 5 + 900 – (15 – 5) = 1185. (The ability to express that kind of fitness function is another new feature.)
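In code, that fitness function is just the sum of the scoring events minus the waiting times between them; here’s a sketch that reproduces the example arithmetic above (my own illustration, not Wallace’s internal representation):

def fitness (score_events):
    """`score_events` is a list of (time, points) pairs in chronological order.
    Each event is worth its points minus the time elapsed since the previous event."""
    total, last_time = 0, 0
    for time, points in score_events:
        total += points - (time - last_time)
        last_time = time
    return total

# 300 points at time 5, then 900 points at time 15:
assert fitness ([(5, 300), (15, 900)]) == 1185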

So, here’s the stats (did I mention collecting these statistics is another new feature?):

Generation   Maximum   Minimum   Mean   Median
1            2369      -1        39     0
2            2369      -1        348    287
3            2969      -1        985    893
4            3272      -1        1405   1781
5            4170      -1        1518   2369
6            4181      -1        1947   2370

A few observations

  • A 100% randomly generated solution still managed to score 2400 points in the first generation.
  • By the sixth generation, at least half the solutions are better than the best solution in the first generation.
  • Each generation, the mean and median fitness increase.
  • It should be impossible for the fitness to be -1 with the fitness function I defined. There’s a bug somewhere.
  • Even so, the minimum shows no improvement from generation to generation; mutations can still render a good solution bad.

Of course, you want to see that top solution in the sixth generation, don’t you? Well, unlike the VM-based ones from last time, which ended up doing the same thing over and over, these solutions look like they’re being played by a monkey on crack who occasionally makes a really good move. Of course, since my lease doesn’t cover monkeys and none of my crack dealers are returning my calls, the monkey-on-crack theory is only a hypothesis.

Anyway, here’s the video of the top solution of generation six:

If you’re feeling adventurous, you can try your hand at Wallace yourself:

Download Wallace 0.99.1 (717 KB)

About Smegging Time (or, Wallace 0.99.0 is out!)

I’ll let the following link speak for itself:

Download Wallace 0.99.0 (source tarball, 713 KB)

OK, maybe I can say a little more.

I’ve got Wallace, everyone’s favorite NES emulator for Linux with a genetic programming tool bolted on, worked into a sufficiently functional state to let the outside world play with it. Thet there link will get you a copy of the source code, which you can do the usual “./configure, make, make install” thing on to get a working copy of it.

A couple caveats to keep in mind before diving in, though. First, consider this a tech demo, or pre-alpha software at best, not a polished, finished product. It’ll probably work OK if you don’t poke or prod it too hard, though if you want to crash it or send it into undefined behavior I can think of three ways off the top of my head to do so. Also, I make no guarantees that this release will be compatible with future ones, so don’t come crying to me if the evolver you’ve been running for weeks stops working when 0.99.1 (or whatever) comes out.

And of course, I’ve only tested this on my laptop running Debian unstable. Getting it to run on some other Linux system shouldn’t be too hard. With work you might even be able to get it working on some other platform GNOME has been ported to, but I wouldn’t bet too much on it.

Now that all that’s out of the way, what’s new since the last status update? Plenty! The VM that the candidate solutions run on is now register-based (loosely based on the MIPS architecture), so solutions only crash inside the VM if they try to divide by zero nowadays. You can also save your progress in the evolver tool, it executes as quickly as possible (instead of at normal emulation speed), and you can watch and encode candidate solutions at your leisure.

So what does it take to get a little genetic programming action started? Let’s say you wanted to generate something to play Dr. Mario (since score is much more closely correlated with progress there than in Super Mario Bros.). First, you tell the Evolver what ROM image to use and what the initial state will be — here, we pick a state slot that begins on Level 10 on High speed:

Game tab on the Evolver

Next, we define our fitness function by specifying a set of metrics to measure and how to modify them. Here, our fitness will be the score plus the number of seconds before losing. Note the number of conditions we look for to decide when to stop evaluating a solution: the virus count dropping to zero (win!), the Game Over screen (lose!), the Pause screen (programs don’t need breaks!), and the Main Menu screen (ever try hitting A+B+Select+Start?):

Fitness tab on the Evolver

Next, we specify how we’ll be breeding our solutions. The Evolver has several knobs you can play with to control population size and how to generate new solutions from old ones. Lots more are certainly possible. For example, it’d be nice to control how important the fitness score is to surviving to the next generation — currently, the top n solutions make it and the rest are culled, instead of doing some random-by-weight sort of selection. Here’s the somewhat arbitrary settings I chose:

Breeding tab on the Evolver

Now all that’s left to do is switch over to the last tab, hit that nice green Run button, and watch the magic happen.

Population tab on the Evolver

So, how well does this work in practice? I haven’t done much intensive experimentation, but early results are encouraging. In the first generation (where all the solutions are purely randomly generated), most of the solutions either crash before getting very far or don’t do anything, letting the pills just stack up in the middle until they reach the top. A couple of solutions do exhibit slightly more interesting behavior, such as this one, which rotates like mad and shoves pills to the left as fast as it can:

Solution from Generation 1

The top performer from Generation 1, with a fitness of 16. (660 KB, 16 seconds)

Over the first few generations, the solutions that crash quickly die off, and the ones that just let the pills fall in the middle flourish. Then the ones that shove the pills to one side or another, buying a few more precious seconds and thus increased fitness, dominate. And then, in Generation 6, an exciting development: the first solution to kill a virus!

Solution from Generation 6

The top performer from Generation 6, with a fitness of 309. (383 KB, 9 seconds)

To be fair, that screenshot makes the solution look like it’s playing far more shrewdly than it really is. The video tells the whole story: it’s just shoving pills to the right, but does it at just the right rate to line up the red sections of the first three pills with the red virus.

And if you think that’s neat, check out what happens in Generation 7:

Solution from Generation 7

The top performer from Generation 7, with a fitness of 916. (700 KB, 16 seconds)

That solution pushes pills to the right and rotates them at just the right speed to knock out two viruses with the first three pills. It is interesting how the end result after three pills is the same as what I’d do if I were behind the controller. I can’t help but wonder how an Intelligent Design proponent (that is to say, a creationist) would distinguish whether I or a seventh-generation evolved computer program were playing, given that screenshot.

Anyway, the next couple generations don’t turn up anything with a higher fitness than that, and that’s the point where I stopped it so I could work on pushing out this release. Nevertheless, Generation 9 did produce the longest-lived solution so far, clocking in at almost double the time any of its competitors does, and still managing to kill a virus along the way.

Solution from Generation 9

The longest-lived performer from Generation 9, with a fitness of 329. (1.2 MB, 29 seconds)

Not too shabby for a couple hours of evolution on a 1.7 GHz Pentium M, eh?

After all that I bet you’re dying to try this out for yourself, so I’ll reiterate:

Download Wallace 0.99.0 (source tarball, 713 KB)

Attack of the Underflow

In principle, the last major piece of Wallace has landed: randomly generating solutions, evaluating them, throwing away bad ones, and generating new solutions through asexual (cloning and mutating) and sexual (using crossover) reproduction.

However, there are two small problems. First, the stack-based VM is exceptionally prone to crashing. Second, the stack-based VM is exceptionally prone to crashing. Now I know technically speaking that’s only one problem, but I thought it was such a big one, it was worth mentioning twice.[0]

The implementation of the VM, of course, is fine. (At least, there aren’t any known bugs in it.) Rather, the programs that are running within the VM have a tendency to perform an illegal operation and crash. There are four ways in which a program can crash inside the VM, most common first:

  • Stack underflow: It tries to pop a value off the stack, but the stack is empty.
  • Reserved opcode: It tries to execute an opcode that has no operation associated with it.
  • Stack overflow: It tries to push a new value onto the stack, but the stack is already full.
  • Divide by zero: Because you can’t.

Reserved opcodes could be eliminated by assigning them to something, but the stack underflows are quite problematic. Out of 89 operations the VM supports, only 6 of those don’t try to pop anything off the stack. The vast majority of operations not only need to pop their operands, but they also leave the stack with fewer values on it than it originally had. So naturally, when you’re randomly generating programs to run in the VM’s instruction set, most of the time they’ll trigger a stack underflow and crash.
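A quick back-of-the-envelope simulation shows just how bad the odds are. This is a toy model, not the real VM: it assumes each popping operation consumes one value and each non-popping operation pushes one, which is roughly the shape of things.

import random

TOTAL_OPS, POPPING_OPS = 89, 83        # only 6 of the 89 operations never pop

def instructions_before_crash ():
    """Run a stream of random instructions against an initially empty stack."""
    depth, executed = 0, 0
    while True:
        if random.randrange (TOTAL_OPS) < POPPING_OPS:
            if depth == 0:
                return executed        # stack underflow
            depth -= 1
        else:
            depth += 1                 # a push-immediate style instruction
        executed += 1

trials = sorted (instructions_before_crash () for _ in range (10000))
median = trials[len (trials) // 2]     # almost always 0: most random programs die instantly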

In fact, the vast majority of programs crash on their very first instruction. Furthermore, in a trial using a population size between 100 and 500 candidate solutions, measuring fitness solely by how many instructions got executed before a crash, even after 50 generations the typical candidate solution still only managed to successfully execute 2 instructions before crashing.

Granted, that’s up from a median of 0, but it’s still not very encouraging. And to keep this in perspective, you’ll obviously want these programs executing more than one instruction per emulated frame, and there’s slightly over 60 such frames per second. Programs that die after only a handful of instructions aren’t going to get anywhere.

(As a side note, however, one candidate solution — one of the initial randomly-generated ones — actually managed to hit the 6000-instruction timeout I arbitrarily set! I wasn’t able to peek into it and see what it was doing, but I strongly suspect it managed to jump into the “uninitialized” portion of memory, which would be filled with 0 bytes, all of which map to nop. Whatever it did, that “gene” never got passed on to anyone else in the population.)

I’m forced to conclude that, despite the stack-based VM being attractive from an implementing-the-VM perspective, it’s quite unsuitable when throwing random code at it. A register-based VM would be infinitely more robust, since the registers the instructions operate on will always be there; just the values stored inside them would change. Alas, this change involves replacing just about everything in the VM except for the mutation engine, which is a large chunk of tedious (but relatively straightforward) coding. But unfortunately, it must be done.

Footnotes

[0] To steal a line from my laptop’s namesake.

LIFO the Party

Time for your irregularly scheduled Wallace update.

As you’ll recall, there are two key things you need for genetic programming (or, more generally, any evolution-like process): chunks of information that get replicated imperfectly, and a metric for evaluating each chunk’s fitness; the less-fit chunks are discarded, and the more-fit chunks go through the process again.

Well, we now have a data representation for our “solutions” to the “playing Super Mario Bros.” problem. While a simple sequence of button presses could’ve worked, it feels like we’re working with recordings rather than organisms. (Actually, we would be; an FCEU movie file is little more than a compressed sequence of button presses.) So if we want better verisimilitude with biological evolution, we want our candidate solutions to be more “alive” than that.

So, why not make our solutions computer programs themselves?

Before going into too much detail about how I’ve gone about doing this, here’s an example of a very simple program playing Super Mario Bros., and doing surprisingly well considering it was my first attempt. (There’s no genetic programming going on here; the program playing in the video below was hand-coded.)

Mario v. a quick-and-dirty program to try to play it

Ogg Theora video, 938 KB, 37 seconds. Ogg codecs for Windows might be found here.

Over the weekend I’ve implemented a quick-and-dirty virtual machine in which you can write programs that try to play Nintendo games. The VM provides some key services:

  • A simple instruction set similar to machine code to build the programs out of.
  • A means for sending button presses to the emulator.
  • A means for reading the game’s memory.
  • A sandbox to prevent a misbehaving program from doing any damage to the actual system.

Unlike the instruction set probably used by the computer you’re sitting in front of right now, the VM is a stack-based (as opposed to a register-based) machine. There’s a stack that just about every instruction uses; each instruction pops its arguments off the stack, does its work, and pushes the results onto the stack.

This kind of architecture saves us from having to worry about what addressing modes we need for each instruction; everything simply uses the stack. Granted, since the VM I’ve written can work with both 8-bit and 16-bit values (the NES uses an 8-bit processor, but we need 16 bits to easily address all memory locations), the instructions do worry a little about the types of data on the stack. Of course, it would be possible to store type information in the stack as well, but then the instructions all need to be clever about how they handle each possible data type, which makes implementing the VM more error-prone.

Anyway, I’m sure you’re dying to see what program generated that video that manages to get all the way to the halfway point of World 1-1. Well, here it is, converted to an assembly language-style format:

start:     push_i8u 0x82       ; RIGHT + B buttons
           pop_r               ; write to output register
mainloop:  push_i8u 20         ; will loop for 20 vblank periods
waitloop:  pause               ; wait for next vblank period
           push_i8u 1          ; decrement our counter ...
           sub_i8u             ;   ... by one
           dup                 ; push a copy of the result
           push_i16 waitloop   ; if counter isn't zero ...
           cja                 ;   ... go back to waitloop
           pop                 ; throw away counter
           push_r              ; get our current output
           push_i8u 0x01       ; A button ...
           xor_8               ;   ... gets toggled
           pop_r               ; write to output register
           push_i16 mainloop   ; and go back to top ...
           ja                  ;   ... of mainloop

OK, so I lied; all instructions except the push immediate instructions get their arguments from the stack.

Anyway, as you can see from the comments, all this program does is hold the Right and B buttons constantly, and every 20 emulation frames (about every 1/3 of a second) it either presses or releases the A button. As a result, Mario constantly runs to the right and jumps periodically. It’s surprising how far this 16-line dumb-as-a-post program gets on its second life!

So now all that’s left is a way to generate new programs for the VM, make them reproduce and mutate, and pit them against each other in a battle to the death.

Crossing the Streams

The latest update to Wallace, despite the delay since the last installment, doesn’t actually add any features directly related to its ultimate goal of using genetic algorithms to evolve something that can play Super Mario Bros.

Instead, it ports the audio and video code from SDL and drawing-pixmaps-on-the-window, respectively, to the GStreamer framework.

One of the neat things about GStreamer’s design is that it uses pipelines of elements to process multimedia streams, sort of like how you’d put together a bunch of simple programs on a Unix command line. Under this model, doing different things with the data just involves using different elements to process it. For example, taking the emulator’s output and sending it to the screen and speakers involves this pipeline:

Output pipeline

Walemulatorsrc is a custom-made element that acts as a source for the emulator’s output audio and video streams. The two queues act as buffers. Gconfaudiosink and gconfvideosink output the audio and video streams, respectively, using the user’s preferred output methods. This, for example, could let us render to video memory directly. But since the emulator has a very specific format for its audio and video streams, which may not match what those two sink elements accept, the streams are first passed through filters that can convert between different audio (audioconvert) and video (ffmpegcolorspace) formats.
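In GStreamer’s Python bindings, wiring up that pipeline looks roughly like this, assuming walemulatorsrc has registered itself so it can be found by name (the real Wallace code may well build the pipeline element by element instead):

import pygst
pygst.require("0.10")
import gst

# Each "emu." reference links a branch to whichever of walemulatorsrc's
# source pads fits (one audio, one video).
pipeline = gst.parse_launch(
    "walemulatorsrc name=emu "
    "emu. ! queue ! audioconvert ! gconfaudiosink "
    "emu. ! queue ! ffmpegcolorspace ! gconfvideosink")
pipeline.set_state(gst.STATE_PLAYING)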

The advantage of using a pipelined series of elements like this becomes apparent when, say, instead of drawing to the screen and playing to the speakers, you want to encode a video file. Well:

Encoder pipeline

Now after converting the streams to a convenient format, we run the audio stream through a Vorbis encoder, run the video stream through a Theora encoder, multiplex the two streams into an Ogg media file, and write the result to a file. Of course, there’s no reason you couldn’t generate a video file in whatever your favorite format is, either; it all depends on what elements you plug into the pipeline.
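Continuing the sketch from above, only the pipeline description changes; capture.ogg is just a stand-in filename:

# The two branches now feed encoders and an Ogg muxer instead of the sinks.
pipeline = gst.parse_launch(
    "walemulatorsrc name=emu oggmux name=mux ! filesink location=capture.ogg "
    "emu. ! queue ! audioconvert ! vorbisenc ! mux. "
    "emu. ! queue ! ffmpegcolorspace ! theoraenc ! mux.")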

And as proof that this works, you can watch me beat up Metal Man from Mega Man 2:

Mega Man v. Metal Man
Ogg Theora video + Vorbis audio, 2.8 MB, 40 seconds. Windows users might be able to find the necessary codecs here if they don’t have them already, I guess.

Also, using GStreamer elements to render video on-screen scales the video to fill the window for free, which is a nice side benefit.

As nifty as all this is, and as simple as it seems in principle, implementing it turned out to be trickier than I expected. The main source of headaches was the need to implement that walemulatorsrc element myself. No prior experience writing elements + non-trivial behavior (two source pads, fed asynchronously by the emulation core) = lots of aggravation. I’m sure anyone who groks GStreamer would shudder at the code inside that element, which, let’s be honest, has a fair amount of the proverbial duct tape and baling wire.

Also, I had expected outputting the video via GStreamer to be faster than drawing a pixmap image onto the window about 60 times a second, but it turns out the GStreamer solution takes more CPU time than the “ugly and wrong” way. A little tentative profiling with Sysprof suggests that 20% of the CPU time is being spent inside that ffmpegcolorspace element, converting the 8-bit RGB bitmaps generated by the emulation core into the YUV color space that elements like gconfvideosink and theoraenc seem to want. It may be worthwhile in the future to eschew ffmpegcolorspace and do the conversion within walemulatorsrc, but you know what they say about premature optimization.

I promise, the next status update will actually further Wallace along to its ultimate goal. I can’t wait to see how what I have planned next stacks up.

[Editor's note: your homework exercise is to figure out why that last sentence is a combination clue and pun.]

Inside the Mushroom Kingdom

I spent a ridiculous amount of time this weekend hacking on Wallace. Of course, a lot of that was due to a near-total overhaul of the earlier code, refactoring thousands of lines to make the way forward easier (since I now have a decent mental picture of what I need, instead of aping the existing FCEU code.)

Of course, that doesn’t mean refactoring was the only change since the last update. As you’ll recall, I had been using screen scraping to get information about the current game state (things like score, number of lives, etc.), but along the way I found it was far more complicated than I had anticipated. Plus, even Super Mario Bros. has pathological cases where screen scraping is more problematic than recognizing ten digits, as anyone who’s ever gotten more than nine extra lives can tell you.

So, screen scraping is dead and buried. (Using image recognition to detect events, however, is still there, since it’s easy to implement, easy to understand, and widely applicable.) Luckily, there’s a much richer resource for getting game information: the game’s memory itself. Naturally, in-game memory has all the information we’re likely to need, assuming we can find it, and the FCEU core’s debugging hooks let us access it easily from code.

Ah, but how to find it? The FCEU core comes to the rescue again; part of its cheating interface lets you search game memory for interesting values. (As you may know, cheating devices like the Game Genie work by replacing values in memory; an “infinite lives” cheat, for example, just returns a constant value whenever the game looks to see how many lives are left.) Since FCEU is happy to do the heavy lifting, Wallace just has to provide a graphical interface to it.

To get an idea of how this works, let’s figure out where in memory Super Mario Bros. keeps track of the number of coins we’ve collected. We start by resetting the search, which saves a snapshot of the current contents of memory:

Before

In this example, we currently have 3 coins, and we’re about to collect 19 more. After we do so, we search for all bytes in memory that were originally “3” but are now “22”.

After

We lucked out in this example: there’s only one byte in the address space that changed in the way we were looking for. In most cases, there’ll be some false positives, but they can be screened out by doing another search on the results.
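The search itself is conceptually just a snapshot-and-compare; here’s a minimal Python sketch, with ram standing in for whatever object the FCEU debugging hooks expose:

def snapshot(ram):
    # Save a copy of all of RAM for later comparison ("resetting" the search).
    return [ram[addr] for addr in range(len(ram))]

def search(before, ram, was, is_now):
    # Return every address whose value was `was` in the snapshot and is `is_now` now.
    return [addr for addr, old in enumerate(before)
            if old == was and ram[addr] == is_now]

# Usage:
#   before = snapshot(ram)                        # reset: we have 3 coins here
#   ... collect 19 more coins in the emulator ...
#   hits = search(before, ram, was=3, is_now=22)  # ideally a single address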

Also, we assumed that the coin total is stored as a single byte in memory. More pathological situations are also possible, of course. For example, since the score is a six-digit number, there’s no way it can fit into a single byte (which can range only from 0 to 255), so it must be spread across multiple, presumably contiguous, bytes. Additionally, the data might not even be laid out like you’d expect a multi-byte integer to be; for example, the score in Super Mario Bros. is stored in six bytes, one byte per digit, starting with the millions digit (which only appears on screen if your score gets that high) and leaving off the ones digit (which is always 0 anyway). And, of course, endianness is also potentially an issue.

Of course, with a little trial and error, we can use the Memory Browser to find where the data we want is stored, and then extract it when we need it:

Score

Since I’m sure you’re dying to know, here’s where some interesting values can be found in Super Mario Bros.’s memory:

Data            Address  Format                        Notes
Current Score   2013     6 bytes, base 10, big-endian  Multiply this by 10 to get the score
High Score      2007     6 bytes, base 10, big-endian  Likewise
Coins           1886     1 byte
World           1887     1 byte                        Zero-based (i.e., World 1 is stored as 0)
Stage           1884     1 byte                        Likewise
Time Remaining  2040     3 bytes, base 10, big-endian
Lives           1882     1 byte                        Zero-based
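Reading a few of those values back out is straightforward; here’s a rough Python sketch using the addresses from the table, again with ram as a stand-in for the memory-access object:

def read_score(ram, base=2013):
    # Six base-10 digits, most significant first; the ones digit isn't stored.
    digits = [ram[base + i] for i in range(6)]
    value = 0
    for d in digits:
        value = value * 10 + d
    return value * 10

def read_progress(ram):
    # World, stage, and lives are stored zero-based, so add one for display.
    return {"world": ram[1887] + 1,
            "stage": ram[1884] + 1,
            "coins": ram[1886],
            "lives": ram[1882] + 1}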

So, what can we do with this? We can use the Evolver window to evaluate the fitness of an input. First, we specify our fitness function on the Fitness tab. These are mostly things we’ve seen before, just with a somewhat nicer interface. However, a weight is now associated with each metric; overall fitness is a weighted sum of all the metrics.

Fitness
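The weighted sum itself is nothing fancy; here’s a tiny Python sketch with illustrative weights, not the ones in the screenshot:

# Made-up weights; a big bonus for progress, a big penalty for game over.
WEIGHTS = {"world": 1000, "stage": 100, "score": 1, "game_over": -10000}

def fitness(metrics):
    # Overall fitness is just the weighted sum of the individual metrics.
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

print(fitness({"world": 1, "stage": 2, "score": 550, "game_over": 0}))  # 1750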

Next, we specify the initial conditions for our run in the Game tab:

Game

The Breeding tab will eventually be where we specify how to generate new candidate solutions in our genetic algorithm, but for the time being there’s nothing there. All we can do is run a manual test of our setup on the Population tab, where the human plays and the Evolver evaluates how well he or she does:

Playing

You’ll note that our fitness function weights world and stage much more heavily than score, as can be seen in the computed fitness value. There’s also a very heavy penalty for game over:

Over

So now, the main missing piece is the genetic algorithm.

Two Steps Forward, One Step Back

This time, the screenshot comes first. Check it out:

Super Mario Bros and the Evaluator window

That’s right, Wallace can now screen scrape games while they’re being played in order to determine how well the “player” is doing. Sort of.

Here’s what the configuration for this looks like:

Screen scraping the score from Super Mario Bros

Intuitively, we need two pieces of information in order to screen scrape: the part of the screen where the data is, and how to convert the raw pixels into a number.

The first part of that is easy enough; load a screen dump into the viewer and draw a rectangle around the part of the screen you want. To make things easier, the current code assumes each character will be on an 8-pixel boundary, so the box “snaps” along the grid lines accordingly. (More on this later….)

The second part is a bit trickier, as we need to sort of do OCR on the image to figure out what characters are there. Luckily, part of NES video memory is the “pattern table”, where the currently loaded patterns are stored. A pattern is simply an 8×8 bitmap with a color depth of 2 bits. NES games use the pattern table as the building block for backgrounds and sprites. For example, here’s the pattern table for Super Mario Bros:

Pattern table from Super Mario Bros

The colors are arbitrary — the actual color you see during the game is a result of combining the pattern’s two color bits with two additional color bits associated with the particular part of the screen it’s being drawn at. If you look carefully, you can see that the first set of patterns are pieces of Mario sprites.
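For reference, each pattern is stored as 16 bytes: two 8-byte bit planes that combine to give every pixel its 2-bit value. Unpacking one in Python looks roughly like this:

def decode_pattern(tile_bytes):
    # The first 8 bytes are the low bit plane, the next 8 the high bit plane.
    rows = []
    for y in range(8):
        lo, hi = tile_bytes[y], tile_bytes[y + 8]
        rows.append([((lo >> (7 - x)) & 1) | (((hi >> (7 - x)) & 1) << 1)
                     for x in range(8)])
    return rows  # 8 rows of 8 pixel values, each 0-3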

Of course, what we’re really interested in are the patterns for each number, as those tell us what each digit will look like (modulo palette rotation) on the screen. Once we assign each digit to its corresponding pattern (assuming a base-10 number system for the time being), converting a part of the screen to a numerical value is relatively easy.
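In outline, that conversion looks something like this; tile_at and digit_patterns are placeholders for whatever the actual code uses:

def scrape_number(tile_at, x, y, num_digits, digit_patterns):
    # `tile_at` returns the decoded 8x8 pattern drawn at a given pixel position,
    # and `digit_patterns` maps each digit 0-9 to its pattern.
    value = 0
    for i in range(num_digits):
        tile = tile_at(x + 8 * i, y)
        value = value * 10 + digit_patterns.index(tile)
    return value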

“Relatively” being the operative word here, unfortunately. For starters, I’ve only been able to figure out how to access the current contents of the pattern table from the FCEU core. For Super Mario Bros this is good enough, since it never changes. However, games with more graphics data will swap different pattern data in and out; for example, the contents of the pattern table in Mega Man 2 change depending on which stage you’re in. I think this might be handled by the different mappers in NES games, but the FCEU code here is particularly inscrutable, so I haven’t figured out any way to access the “other” pattern tables without playing to the point where they’re loaded.

Also, remember how I mentioned the code assumes that scores and other numbers you want to screen-scrape occur on 8-pixel boundaries? If you look very carefully at the screen shots above, it looks like that’s the case, doesn’t it? After all, the upper-left corner of the score display starts at pixel (24, 24) (where pixel (0, 0) is the upper-left corner), right?

Well, no. Look very carefully at the patterns for each digit. The digits themselves are only 7 pixels wide, so one column of pixels on each is “blank”. This margin is on the left side of each digit.

In other words, the score actually starts at pixel (23, 24).

The code currently hacks around this by adjusting all your screen scraping bounding boxes by one pixel to the left. Which, of course, breaks pretty much any other game you try.

Plus, if you want to scrape multiple values, you need to set up an action for each separately, which is very much a pain.

In other words, the code is functional, but the interface has got to go. While the “one action per metric adjustment” approach sounded nice back when I made that design decision, it’d be much nicer to take one screen dump and specify all the parts of the screen we want.

Additionally, since we need pixel-perfect precision for specifying what parts of the screen should be scraped, it’d be really nice to have the program figure out the bounding rectangles for us. As a one-time sort of thing when defining an action, it could take the digit-to-pattern mapping we specify and scan the entire screen to deduce where the digits appear, assemble them into rectangles, and let those rectangles be mapped to metrics.

Just another example of not learning design-relevant issues until actually writing code. In your face, waterfall model!