Happstack and Streaming: Part 5: Modifying Happstack

Modifying Happstack

So now that we’ve established that changes to Happstack are needed to support streaming, what should those changes look like? Abstractly, there needs to be a way to give it a series of chunks that it can send to the browser individually via chunked transfer encoding, flushing the network buffer after each one to make sure they’re sent promptly.

The most obvious way to do this is to add a third data constructor to the Response data type, for use with streams. The main difference would be that, instead of accepting a single ByteString as the response to the browser, it would somehow get ahold of several.

What would that “somehow” be, though? Looking back to our original motivation (the real-time multiplayer game being played in a web browser), the entire contents of the stream aren’t known at the time it starts; they’ll be determined by the actions of the various players — i.e., things happening in the IO monad. This means a simple list of ByteStrings isn’t the way to go. Although we did demonstrate such a list could be built anyway using some trickery, ideally we’d want something a bit more elegant, or at least something that doesn’t require actively subverting the type system via unsafeInterleaveIO.

Therefore we’d want to put an IO action in the response that could be used to generate new ByteStrings on demand for each chunk. The simplest way would be to use an IO ByteString:

data Response = {- Response and SendFile data constructors... -}
              | Chunked { rsCode      :: Int,
                          rsHeaders   :: Headers,
                          rsFlags     :: RsFlags,
                          rsValidator :: Maybe (Response -> IO Response),
                          chGenerator :: IO L.ByteString
                        }

Happstack could execute that action repeatedly to generate successive chunks, and we could even denote end-of-stream by having it produce an empty ByteString, which neatly parallels how chunked transfer encoding works.
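
To make that concrete, here’s a minimal sketch of the kind of loop Happstack’s core could run to drive such a generator. This is hypothetical pseudocode for Happstack’s internals, not actual Happstack source; in particular, the real loop would also wrap each chunk in the chunked-transfer-encoding framing rather than writing it raw:

import qualified Data.ByteString.Lazy.Char8 as L
import System.IO (Handle, hFlush)

-- Hypothetical driver loop: run the generator to get each successive
-- chunk, write it out, and flush, stopping at the empty ByteString.
driveChunks :: Handle -> IO L.ByteString -> IO ()
driveChunks h gen =
    do chunk <- gen
       if L.null chunk
          then return ()           -- empty chunk: the stream is over
          else do L.hPut h chunk
                  hFlush h         -- send it to the browser promptly
                  driveChunks h gen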

An alternative would be something that includes an explicit state parameter that gets chained from one call to the IO action to the next. Without something like that, it would be awkward (albeit not impossible) for the action to keep track of where it’s at in the stream and what it should generate next.

data Response a = {- Response and SendFile data constructors... -}
                | Chunked { rsCode         :: Int,
                            rsHeaders      :: Headers,
                            rsFlags        :: RsFlags,
                            rsValidator    :: Maybe (Response a -> IO (Response a)),
                            chGenerator    :: a -> IO (a, L.ByteString),
                            chInitialState :: a
                          }

Unfortunately, that changes the kind of Response from * to * -> * in order to account for the new type parameter a representing whatever state the stream wants to keep track of. Since Response is used throughout Happstack and programs built on it, breaking API compatibility like that really ought to be avoided if we can help it, especially when it’s just for the sake of what would be an infrequently used feature.

Instead, what if we turn things around and put the generator in the driver’s seat: pass it an IO action to call whenever it has a new chunk ready to be sent:

data Response = {- Response and SendFile data constructors... -}
              | Chunked { rsCode         :: Int,
                          rsHeaders      :: Headers,
                          rsFlags        :: RsFlags,
                          rsValidator    :: Maybe (Response -> IO Response),
                          chGenerator    :: (L.ByteString -> IO ()) -> IO ()
                        }

Happstack would invoke chGenerator with a function that handles writing a chunk to the network and doing whatever it needs to do when the stream is over. The last thing chGenerator would do is call that function with an empty ByteString to signal end-of-stream. chGenerator would be responsible for chaining any state information from one step to the next. It would actually look rather like the pipe example from earlier; the function provided by Happstack would be used in place of hPrint and hClose, but other than that it’s the same basic idea.
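
For example, the timestamp stream from the proof-of-concept could be rewritten against this interface roughly like so (my own sketch, with the interval and count constants inlined):

import Control.Concurrent (threadDelay)
import Data.Time.Clock (getCurrentTime)

import qualified Data.ByteString.Lazy.Char8 as L

-- A sketch of a generator written against the proposed interface; 'send'
-- is the chunk-writing function Happstack would hand us.  The countdown
-- is the only state, and the generator threads it through the recursion
-- itself; no extra type parameter on Response required.
clockGenerator :: (L.ByteString -> IO ()) -> IO ()
clockGenerator send = go (30 :: Int)
    where go 0 = send L.empty                        -- end-of-stream marker
          go n = do now <- getCurrentTime
                    send (L.pack (show now ++ "\n")) -- hand one chunk over
                    threadDelay 100000               -- 0.1 second, as before
                    go (n - 1)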

There’s still the issue of signaling to the generator when it should stop because the network connection closed. But hey, we’ve got a perfectly good return value from the Happstack-provided function that we’re not using. Let’s use it:

data Response = {- Response and SendFile data constructors... -}
              | Chunked { rsCode         :: Int,
                          rsHeaders      :: Headers,
                          rsFlags        :: RsFlags,
                          rsValidator    :: Maybe (Response -> IO Response),
                          chGenerator    :: (L.ByteString -> IO Bool) -> IO ()
                        }

The Happstack-provided function returns True if more data should be generated, or False if the stream should be aborted because the connection has closed.
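
Adapting the same sketch (same imports as before), the generator now inspects the callback’s result and stops as soon as Happstack reports the connection is gone:

clockGenerator' :: (L.ByteString -> IO Bool) -> IO ()
clockGenerator' send = go (30 :: Int)
    where go 0 = send L.empty >> return ()  -- normal end-of-stream
          go n = do now <- getCurrentTime
                    more <- send (L.pack (show now ++ "\n"))
                    if more
                       then threadDelay 100000 >> go (n - 1)
                       else return ()       -- connection closed; stop early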

That ought to work pretty well. It addresses all the problems identified with our attempts to stream without changing Happstack. The use of Chunked as the data constructor for the Response object will tell Happstack to suppress the Content-Length header, use chunked transfer encoding, and flush the network buffer after each chunk. New data to stream is only generated as needed, without using additional threads or large buffers. API compatibility with existing code is preserved; we’re adding a new interface and leaving existing ones untouched. Even better, there’s no need to use any trickery to achieve lazy IO; with Happstack’s cooperation, the usual kind of IO works just fine.

Mind you, I haven’t written a patch that implements this proposal. It’s just an idea. At the very least, I’d want to have one realistic application (i.e., not a simple proof-of-concept like from earlier) that demonstrates how it can be used, and to verify that it does indeed work correctly, before actually submitting a patch. After all, as any good programmer can tell you, sometimes flaws in a design don’t become apparent until you actually try to implement it. But it seems like this ought to work.

Now I just have to get around to writing the application whose idea motivated me to investigate the feasibility of streaming data from Happstack to begin with.

Happstack and Streaming: Part 4: The Flaw

The Fatal Flaw

All three approaches to generating a lazy ByteString from the IO monad actually do work, as you can verify by loading the source code into ghci and invoking them manually. However, if you go through the web server and visit one of the finite stream paths, no output will appear until the stream has finished being generated, at which point the entire set of output from the server will arrive all at once, like so:

paul@queeg:~/tmp$ GET -es http://localhost:8000/pipe/limited

(nothing happens for a while, and then…)
200 OK
Connection: close
Date: Mon, 18 Jan 2010 13:30:55 GMT
Server: Happstack/0.4.1
Content-Type: text/html; charset=utf-8
Client-Date: Mon, 18 Jan 2010 13:30:58 GMT
Client-Peer: 127.0.0.1:8000
Client-Response-Num: 1
 
2010-01-18 13:30:55.342444 UTC
2010-01-18 13:30:55.443235 UTC
2010-01-18 13:30:55.544115 UTC
2010-01-18 13:30:55.644934 UTC
2010-01-18 13:30:55.745514 UTC
2010-01-18 13:30:55.846283 UTC
2010-01-18 13:30:55.947581 UTC
2010-01-18 13:30:56.048834 UTC
2010-01-18 13:30:56.150068 UTC
2010-01-18 13:30:56.251281 UTC
2010-01-18 13:30:56.352521 UTC
2010-01-18 13:30:56.454423 UTC
2010-01-18 13:30:56.555816 UTC
2010-01-18 13:30:56.657179 UTC
2010-01-18 13:30:56.758547 UTC
2010-01-18 13:30:56.859939 UTC
2010-01-18 13:30:56.961296 UTC
2010-01-18 13:30:57.062581 UTC
2010-01-18 13:30:57.163847 UTC
2010-01-18 13:30:57.265212 UTC
2010-01-18 13:30:57.366438 UTC
2010-01-18 13:30:57.4677 UTC
2010-01-18 13:30:57.569059 UTC
2010-01-18 13:30:57.670414 UTC
2010-01-18 13:30:57.772045 UTC
2010-01-18 13:30:57.873404 UTC
2010-01-18 13:30:57.974761 UTC
2010-01-18 13:30:58.076117 UTC
2010-01-18 13:30:58.177485 UTC
2010-01-18 13:30:58.278847 UTC

The point where the delay occurs reveals what’s going on — not even the headers are getting sent out until the entire response has been generated. That’s not Happstack’s doing; it’s the buffering happening inside the networking library. In the absence of any command to send data out immediately, it’s going to wait until it has accumulated a large chunk to send all at once. Sending one large packet instead of lots of small packets makes more efficient use of network bandwidth, since each packet carries its own overhead. And since Happstack wasn’t written with streaming in mind, it doesn’t flush the buffer until it’s written out the complete response.
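
The same mechanism is easy to see with any ordinary Handle. Here’s a rough illustration using the standard System.IO buffering controls (not Happstack code; remember, Happstack doesn’t give us access to its socket handle):

import System.IO

-- With block buffering (the usual default for sockets and pipes), writes
-- quietly accumulate in the buffer; nothing reaches the network until the
-- buffer fills or something explicitly flushes it.
bufferingDemo :: Handle -> IO ()
bufferingDemo h = do hSetBuffering h (BlockBuffering Nothing)
                     hPutStrLn h "this sits in the buffer..."
                     hFlush h  -- ...until a flush finally pushes it out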

As further evidence, the infinite streams do stream, kind of. Once the buffer fills up, the networking library will push it out to make room for more data. As a result, nothing will arrive for a while, then all of a sudden lots of data will arrive, then another long pause as the buffer fills back up, then another big chunk of data, and so on.

This alone shows why, despite our best efforts at cleverly creating the response, it’s all for nothing unless we can control the buffering behavior down in the network library, which Happstack doesn’t provide any access to. The only exception would be if we’re trying to stream data quickly enough to rapidly fill up the buffer, but since there’s also no way to control the size of the buffer, that “solution” isn’t reliable, and certainly not applicable if we’re only trying to stream a relative trickle of information.

Buffering Strikes Back

Buffering introduces additional problems that, while they don’t kill the solution outright, add some significant inefficiencies. These are easiest to see with the infinite streams, which continue until the browser closes the connection. (They’d also arise any time the browser closes a finite stream before receiving all the data.)

First, Happstack only detects that the connection to the browser has been closed once the network library tries to send out data and returns an error. Between the time when the connection actually closes (i.e. when the browser sends a TCP FIN packet) and the time when Happstack notices, the program continues generating new data to send. All this effort is wasted, since the data will never get sent out and will be thrown away. The app server winds up doing a lot of useless work as a result.

Being able to control network-level buffering would largely deal with this problem too: since a closed connection is detected when trying to send data, sending each chunk out immediately would allow the app server to stop generating the stream much more promptly. If that were the case, the approach of manually using unsafeInterleaveIO, despite being the most difficult of the three, would work fairly well. The other two, however, have their own buffering problems, independent of what’s happening at the network level.

For example, what is a pipe but a buffer being managed by the operating system? Since a separate thread is writing data to the pipe independently of the thread reading from it, the generation thread will keep on going even if the network connection closes, until the pipe fills up. In theory the OS should cause writes to fail as soon as the read end of the pipe is closed, but using lazy IO to read from it seems to keep this from happening promptly. The generation thread will keep writing more data to the pipe until it too gets a write error and stops.

Using channels is even worse. The network buffer and the pipe at least have the benefit of being of finite size; once they fill up, further attempts to write will block until something reads the data back out or an error is detected and the buffer is destroyed. Channels, however, are unbounded. They never fill up; they just keep growing to make room for new data as it’s written. As a result, in the event of the browser closing the connection prematurely, the size of the channel will grow and grow until the thread writing data to it decides to stop. This is a problem for the infinite streams, since they never stop; eventually the channel will grow to consume all available memory on the system until the OS kills the app server entirely. This is also a problem for finite streams, of course, since those channels won’t get thrown away until they grow to the size of the unconsumed portion of the stream, which would be a big problem if the app server is generating lots of streams.

Playing Nice

But even if all the buffering problems can be dealt with, our solution is still far less than ideal. While the idea of slowly trickling out the stream’s data as it becomes available is legal according to the definition of HTTP, it’s really not the proper way to go about it.

Remember how we had to suppress the Content-Length header? Browsers use that header to know when they can stop reading the response from the server. Without it, the only way they can tell they’ve received all the data is when the server closes the connection. Leaving the connection open has the advantage that the browser can re-use it for the next request it sends to the server, instead of creating a new connection. Establishing a new connection requires doing the TCP three-way handshake again, which involves a round trip to the server that doesn’t carry any data. Being able to reuse the connection is faster since this extra round trip is eliminated. It might not seem like much, but consider a web page that has lots of small graphics on it; without being able to reuse a connection, a new round trip is needed every time the browser tries to download another image. All those little delays add up.

It turns out HTTP does have a way to stream data while still telling the browser how much data to expect: chunked transfer encoding. Basically, the server’s response gets split up into separate chunks, and each chunk carries its own length information. The end of the stream is indicated by a zero-length chunk. With chunked transfer encoding, the browser knows when it’s finished receiving the data, even though the server doesn’t necessarily know how much data will be sent beforehand.
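
For illustration, here’s roughly what a chunked response looks like on the wire: each chunk is preceded by its size in hexadecimal on a line of its own, and a zero-size chunk (followed by a final empty line) marks the end. Headers are abbreviated here:

HTTP/1.1 200 OK
Transfer-Encoding: chunked

1a
abcdefghijklmnopqrstuvwxyz
5
hello
0
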

Chunked transfer encoding is what we’d want the server to be able to do. Of course, Happstack would need to be modified to support it, since it too needs to know when the stream has ended so it can reuse the connection for the next request from the browser.

In Part 5, we’ll look at just what sort of modifications we might try to make.

Happstack and Streaming: Part 3: Implementation (Sort Of)

Implementation

As a proof of concept for streaming with Happstack, here’s a simple web application that implements each of the three possible approaches discussed earlier. To keep things simple, the data we’ll be streaming is a series of timestamps taken from the system clock at regular intervals. Not a particularly useful application, but it is something simple that uses IO, which is all we’re looking for here. To make things a bit more interesting, the web app will support each approach in two different ways: first, streaming a finite number of timestamp values before ending the stream, and second, streaming an endless series of timestamp values until the browser closes the connection.

Specifically, the web app will support the following six paths:

/pipe/limited
A finite number of timestamps, using an OS-level pipe.
/pipe/infinite
An infinite number of timestamps, using an OS-level pipe.
/chan/limited
A finite number of timestamps, using a Chan.
/chan/infinite
An infinite number of timestamps, using a Chan.
/manual/limited
A finite number of timestamps, using unsafeInterleaveIO manually.
/manual/infinite
An infinite number of timestamps, using unsafeInterleaveIO manually.

First, let’s get all the module imports out of the way. There’s nothing particularly interesting about any of them, so I won’t comment on them further.

{-# LANGUAGE FlexibleContexts #-}
 
module Main (main) where
 
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (Chan, getChanContents, newChan, writeChan)
import Control.Monad (liftM, MonadPlus, msum, when)
import Control.Monad.Trans (liftIO, MonadIO)
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import Happstack.Server.HTTP.Types (Method (..), noContentLength, nullConf, Response, resultBS)
import Happstack.Server.SimpleHTTP (dir, FilterMonad, internalServerError, methodSP, nullDir,
                                    ServerMonad, simpleHTTP, toResponse)
import System.IO (Handle, hClose, hFlush, hPrint, hPutStrLn, stderr)
import System.IO.Unsafe (unsafeInterleaveIO)
import System.Posix.IO (createPipe, fdToHandle)
 
import qualified Data.ByteString.Char8 as S
import qualified Data.ByteString.Lazy.Char8 as L

Next is the code that sets up the Happstack server with the six paths mentioned above.

main :: IO ()
main = simpleHTTP nullConf root
 
root :: (ServerMonad m, MonadPlus m, MonadIO m, FilterMonad Response m) => m Response
root = msum [ dir "pipe"   $ subdir outputPipe
            , dir "chan"   $ subdir outputChan
            , dir "manual" $ subdir outputManual
            ]
 
subdir :: (ServerMonad m, MonadPlus m, MonadIO m) => ((Int -> Int) -> IO Response) -> m Response
subdir output = msum [ dir "limited"  $ nullDir >> methodSP GET (liftIO $ output decr)
                     , dir "infinite" $ nullDir >> methodSP GET (liftIO $ output id)
                     ]
    where decr n = n - 1

To support both finite and infinite streams, each stream generator takes an initial counter and a decrement function. After generating each timestamp value, it applies the decrement function to the counter, and stops if the counter reaches zero. For finite streams, the decrement function subtracts one from the counter. For infinite streams, it does nothing, so that the counter never reaches zero.

Yeah, it’s kind of an ugly hack, but it’s good enough for this.

Here’s the code for generating a stream using OS-level pipes:

outputPipe :: (Int -> Int) -> IO Response
outputPipe decr = do h <- pipeClock decr limitedCount
                     bs <- L.hGetContents h
                     return $ streamBS bs
 
pipeClock :: (Int -> Int) -> Int -> IO Handle
pipeClock decr n = do (readFd, writeFd) <- createPipe
                      writeH <- fdToHandle writeFd
                      forkIO $ output writeH n `catch` abort writeH
                      fdToHandle readFd
    where output h 0 = do hPutStrLn stderr "closing pipe"
                          hClose h
          output h n = do now <- getCurrentTime
                          hPrint h now
                          hFlush h
                          tick now
                          threadDelay interval
                          output h (decr n)
          abort h e = do hPutStrLn stderr $ "caught error: " ++ show e
                         hClose h

The above code creates a pipe. A new thread is forked off to repeatedly write timestamps into the pipe. For a finite stream, the new thread closes its end of the pipe when it’s done writing. Meanwhile, the original thread uses lazy IO to read the entire contents of the pipe into a lazy ByteString, which gets passed back down to Happstack for sending to the browser.

Here’s the code for generating a stream using an IO channel:

outputChan :: (Int -> Int) -> IO Response
outputChan decr = do ch <- chanClock decr limitedCount
                     chunks <- getChanContents ch
                     let bs = L.fromChunks $ takeWhile (not . S.null) chunks
                     return $ streamBS bs
 
chanClock :: (Int -> Int) -> Int -> IO (Chan S.ByteString)
chanClock decr n = do ch <- newChan
                      forkIO $ output ch n
                      return ch
    where output ch 0 = do hPutStrLn stderr "done writing to channel"
                           writeChan ch S.empty
          output ch n = do now <- getCurrentTime
                           writeChan ch $ chunkify now
                           tick now
                           threadDelay interval
                           output ch (decr n)

The above code does the same basic thing, just with a channel of strict ByteStrings instead of a pipe. A new thread is forked off to write timestamp values to the channel. For a finite stream, the new thread writes an empty ByteString to indicate that it’s done. Meanwhile, the original thread uses lazy IO to read the entire contents of the channel into a (lazy) list, up until an empty ByteString. It then lazily combines the individual strict ByteStrings into a single lazy ByteString, which it passes down to Happstack.

The two above approaches each use a new thread that writes a timestamp, waits for the specified interval, writes another timestamp, and so on. Thus, the two code snippets look pretty similar in basic structure. Not so for using unsafeInterleaveIO directly:

outputManual :: (Int -> Int) -> IO Response
outputManual decr = do bs <- manualClock decr limitedCount
                       return $ streamBS bs
 
manualClock :: (Int -> Int) -> Int -> IO L.ByteString
manualClock decr n = do now <- getCurrentTime
                        tick now
                        allFutureChunks <- unsafeInterleaveIO $ ticksAfter now
                        let futureChunks = map fst $ zip allFutureChunks countdown
                        return . L.fromChunks $ chunkify now : futureChunks
    where ticksAfter since = do now <- getCurrentTime
                                let delta = diffUTCTime now since
                                when (delta < interval / 1000000) $
                                        threadDelay (round (interval - delta * 1000000))
                                now' <- getCurrentTime
                                tick now'
                                futureChunks <- unsafeInterleaveIO $ ticksAfter now'
                                return $ chunkify now' : futureChunks
          countdown = takeWhile (> 0) $ iterate decr (decr n)

Here, all the action takes place in a single thread. Again, it takes the strategy of combining a bunch of strict ByteStrings, one per timestamp, into a lazy ByteString with everything. It generates the first timestamp immediately, but defers building the rest of the list until it’s needed. When it is needed, it again generates the first timestamp of the rest of the list immediately, and defers building the rest.

It’s worth noting that this approach is the most difficult to write, since putting unsafeInterleaveIO in the right place is critical to making it work. Also, since everything is happening in a single thread, strictly speaking it’s no longer good enough to just delay for the desired interval between timestamps, since there’s no telling how much time has been spent in other parts of the code. Instead, it needs to check the clock twice each time around: first to figure out how long to delay, if any, and second to actually get the timestamp. Not surprisingly, the code is also the hardest to read of the three.

Finally, there are some odds and ends that bear mentioning. Here’s a simple function that converts a time value into a strict ByteString:

chunkify :: Show a => a -> S.ByteString
chunkify = S.pack . (++ "\n") . show

Here’s a helper function that prints out a message whenever a new timestamp is generated, so we can watch the app server’s progress:

tick :: Show a => a -> IO ()
tick x = hPutStrLn stderr $ "tick " ++ show x

As you can see from the type signatures of those two utility functions, there’s nothing that makes them unique to time values; any value that can be converted to a string (i.e., anything belonging to the Show class) works. We just happen to only use them with UTCTimes.

Anyway, here’s a simple function for converting a lazy ByteString into the Response object that Happstack ultimately wants:

streamBS :: L.ByteString -> Response
streamBS = noContentLength . resultBS 200

We explicitly tell Happstack not to add a Content-Length header, since computing it would require measuring the length of the whole response, which would mean nothing could be sent until the entire response had been generated, defeating the point of what we’re trying to accomplish.

Last, and also least, here are the constants specifying the time interval between timestamps, and how many timestamps to generate in a finite stream before stopping:

interval :: Num a => a
interval = 100000        -- microseconds
 
limitedCount :: Num a => a
limitedCount = 30

That’s the entirety of the code. If you have GHC installed, you can go ahead and compile the program and play around with it to find out whether or not any of the approaches we’ve tried actually work. Since we didn’t pass any configuration parameters to Happstack, it will start a web server on port 8000 that you can point your favorite browser to, at any of the six paths we defined.
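
Assuming the code is saved as Main.hs and GHC and Happstack are installed, something like this should do (GET -es is the same client command we’ll see in Part 4):

$ ghc --make Main.hs -o stream-test
$ ./stream-test &
$ GET -es http://localhost:8000/pipe/limited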

Stay tuned for Part 4, where we’ll see why (spoiler alert!) none of the three approaches actually work.

Happstack and Streaming: Part 2: Lazy IO

Two Haskell features complicate our attempt at streaming data from Happstack: lazy evaluation, and its handling of IO.

Laziness

Unlike more mainstream languages, Haskell evaluates expressions lazily; it doesn’t compute a value until it’s actually used. This has some interesting benefits: for one, it’s quite easy to create infinitely long lists without requiring infinite amounts of memory. For example, the expression [1 ..] is a list of all positive integers. Haskell code can pass that infinite list around like any other value, and as long as we don’t do something that requires evaluating the entire list (such as trying to compute its length), we’re perfectly safe.

We can even do computations where we create multiple infinitely long lists and do operations on the entire thing, as long as we never try to use the entire result. For example, here we take the aforementioned infinitely long list, split it up into evens and odds, add the two infinitely long lists pairwise, and look for the first element greater than 5,000:

foo :: Integer
foo = head greaterThan5000
    where greaterThan5000 = filter (> 5000) sums
          sums            = zipWith (+) odds evens
          odds            = filter odd positives
          evens           = filter even positives
          positives       = [1 ..]

The five lists defined in the where clause above are all infinitely long, but that’s OK because the program never needs to evaluate more than a finite part of any of them to compute the value of foo. (For the record, it’s 5003.)

So, problem solved, right? The application server just needs to give Happstack a ByteString, which after all is just a compacted list of Word8s or Chars, and that list can evaluate to the data we eventually want to send to the browser, once we figure out in the future what it needs to be.

Sadly, it’s not that easy; you’re forgetting another key property of Haskell that lets lazy evaluation work.

Pure Functional IO

Lazy evaluation works because Haskell is a purely functional language: expressions do not have side effects. As a result, functions in Haskell are much like functions in mathematics: their output is entirely determined by their input parameters, and their only result is producing a new value. Haskell functions can’t reference any values whose value might change, since values in a Haskell program never change. This is why lazy evaluation works: it doesn’t matter when the program gets around to evaluating an expression, if ever, since its result will always be exactly the same.

However, this seems to prevent a Haskell program from interacting with the outside world, since the system running a Haskell program, much like the rest of the universe, is not purely functional. Any operation that interacts with the world outside the Haskell program could be affected by whatever happens to be going on at the time the operation is run.

As a simple example, consider the time() function in C on Linux. time() returns the current time on the system, and will obviously return a different result depending on what time it is when it gets called. Reading from a file is similar; the result returned will depend on what’s stored in the file at the time it’s read, which could change if something else writes to the file.

Interaction with the outside world is needed for a program to do anything useful, so how does Haskell get around this? Via a bit of trickery known as the IO monad. Monads can be a bit tricky to get your head around initially, but basically they’re just a way to sequence operations, with the monad doing something to take the output of one operation and give it as input to the next. The particular monad being used gets to decide what “sequencing operations” means, as though you could redefine what the semicolon means in C. Although most monads have a way to both put values into and take values out of them, the IO monad only lets you put values in. You can run a function on the value inside the IO monad, but the result will itself also be in the IO monad. There is no escape from the IO monad.

What’s the point? Conceptually, the IO monad is just another type of state monad, which carries another value (the state) from one operation to the next. In the IO monad’s case, that state is merely the state of the entire universe outside the Haskell program. Anything needing to interact with the outside world runs inside the IO monad, which as a result orders those operations into a particular sequence.
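
In fact, GHC implements IO in essentially this way. Conceptually, it looks like the following sketch (the primed names are mine, to avoid clashing with the real types; GHC’s actual definition uses unboxed state tokens):

-- A conceptual model of IO: an action is a function from the state of
-- the world to the new state of the world plus a result.  Sequencing two
-- actions means threading the world-state from one to the next.
data RealWorld' = RealWorld'

newtype IO' a = IO' (RealWorld' -> (RealWorld', a))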

What does this have to do with anything? Recall from earlier that our use case is streaming the current state of an interactive game as it changes over time in response to input from the players over a network. That input-over-a-network is IO and thus runs inside the IO monad, and thus too must anything that uses the result. So really, our hypothetical function that creates the data to stream back to the browser doesn’t — can’t — just return a result of type ByteString. No, it has to return a result of type IO ByteString.

The good news is, the function the application server implements for Happstack to create a Response runs in the IO monad, so this is legal. The bad news is that the IO monad truly sequences operations: the entirety of our hypothetical result-creating function has to execute before the result can be given to Happstack to send it to the browser. Either the result-creating function returns right away, and thus can never see the result of other players’ actions later, or it waits until those are handled, and can’t return anything until the game is over.

It seems that the rules of the IO monad prevent us from making this work.

Screw the rules, I have money, er, lazy IO!

Lazy IO

Lazy IO lets a program bend the rules of lazy evaluation and IO sequencing a bit. For example, consider the readFile function in the Haskell standard library, whose type is the following:

readFile :: FilePath -> IO String

Superficially, this seems to read the entire file into memory before returning, per the rules of the IO monad. That would make the following program extremely ill-advised:

import Data.Char (ord)
 
main :: IO ()
main = do zeroes <- readFile "/dev/zero"
          print . take 20 $ map ord zeroes

It reads in the contents of the file /dev/zero, converts the characters to their Unicode code point values, and prints the first 20 of them. However, on any Unix-ish system, /dev/zero is a file that contains an infinite number of zero bytes. A program can read from it as long as it wants, and never reach the end.

The Haskell program, of course, doesn’t know about this property of /dev/zero, yet readFile doesn’t try to read an endless series of bytes into memory. Why not? Because readFile is a bit special; it does its IO lazily.

readFile isn’t alone. The getChanContents function is similar:

getChanContents :: Chan a -> IO [a]

It takes an object of type Chan a — a thread-safe unbounded queue of objects of an arbitrary type — and returns an infinite list of all items that are currently in the channel, as well as all items which will ever be written to the channel in the future. It, too, does lazy IO.
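
A toy demonstration of the idea (contrived; not part of the proof-of-concept code):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (getChanContents, newChan, writeChan)

main :: IO ()
main = do ch <- newChan
          forkIO $ mapM_ (writeChan ch) [1 .. 10 :: Int]
          xs <- getChanContents ch  -- a conceptually endless list
          print (take 10 xs)        -- safe: we only ever demand ten items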

How can this be? If you dig into the source code of how these and similar functions are implemented (thanks to the Glasgow Haskell Compiler’s open-source license, you can easily do this), and trace through the calls they make, you ultimately come to this interesting little function:

unsafeInterleaveIO :: IO a -> IO a

The unsafeInterleaveIO function converts any normal IO computation into a lazy one: one that executes not when the IO action would normally run, but instead when its value is actually used. It is implemented using the deeply magic function named unsafePerformIO, which takes that “nothing ever escapes the IO monad” rule and punches it in the face:

unsafePerformIO :: IO a -> a

As you might guess from the fact that their names both start with the word “unsafe”, and that they’re in the module named System.IO.Unsafe, these functions are dangerous, since they let you bypass Haskell’s usual efforts to make lazy evaluation and IO not stab each other in the back. In certain cases, lazy IO can be mostly safe. For example, with readFile, you’re usually in trouble anyway if program A is reading a file while program B is writing it. As long as nothing is actively writing to a file, it doesn’t matter whether it gets read eagerly or lazily, since the data will be the same either way.

Of course, you’re better off using the functions that use unsafeInterleaveIO and friends rather than using them directly. As a general rule, you’re taking matter into your own hands when you use functions prefixed by the word “unsafe”. As the saying goes, if it breaks, you get to keep the pieces.
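
Still, to see exactly what unsafeInterleaveIO buys us, here’s a tiny contrived example:

import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do v <- unsafeInterleaveIO (putStrLn "running!" >> return (42 :: Int))
          putStrLn "v is bound, but nothing has happened yet"
          print v   -- "running!" appears here, when v is finally demanded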

However, the existence of lazy IO offers a few possibilities for how we might try to make streaming work without modifying Happstack.

  1. Use an OS-level pipe. Pipes have two ends: one for writing data into it, and one for reading data out of it. Once they’re created, each end of a pipe can be treated like any other file. Fork a thread to write data to the pipe, which the original thread lazily reads back out as a ByteString.
  2. Use a Chan. Fork a thread to write data to the channel, which the original thread lazily reads and builds a ByteString from.
  3. Use unsafeInterleaveIO directly to lazily generate the ByteString as needed.

You know, this is starting to look like it might actually work. I won’t mention the crippling flaw shared by each of these options, however, at least not just yet. (They also each share a second, non-crippling but still significant flaw; a careful reading of RFC 2616 might give you a clue what it is, if you can’t bear the suspense.) It’s better if we try implementing them and experience how and why they each fail, as will any approach that doesn’t involve modifying Happstack somehow.

We’ll start doing precisely that in Part 3.

Happstack and Streaming: Part 1: Introduction

Introduction

The question: is it possible to use Happstack to serve streaming data?

Spoiler alert: the answer is “no”. At least, not with the current version of Happstack (0.4.1). However, exploring the reasons why it isn’t possible sheds some light on what changes you’d need to make to Happstack to make it work. This five-part series will explore that topic in some detail, including code that almost-but-not-quite does what we want.

What is streaming, anyway?

For this discussion, I’m talking about generating a series of bytes in real time from a web application server and sending those bytes to a web browser as part of a single HTTP response message.

This is different from simply sending a large file to the browser. In that case, the file already exists on disk, and can simply be read and sent across the network as quickly as the client can keep up. TCP automatically takes care of all the ugly details of how you do that reliably; from the application server’s perspective, you send the whole thing in one giant chunk and let the network worry about the rest.

Here, however, the bytes we want to send aren’t known ahead of time. The use case I have in mind is a browser-based game where the current game state changes continuously. It’d be nice to have the server send all those updates as part of a single response, with the browser acting on updates as they are received. The alternative is to have the browser repeatedly poll for updates, sending a brand new request each time it wants the latest information. Polling is more complicated to implement because the browser now needs to worry about timing its update requests, instead of just letting the server send them when an update is ready. (Recall that HTTP uses a client/server, request/response model, preventing the server from sending data to the browser without the browser first asking for it.)

The game scenario is what prevents the server from knowing beforehand what bytes will be sent to the browser, since the game state depends on what moves the players make as the game progresses. To stream updates, the server needs to start sending the response before it knows all the data that will be included in it.

Thus the question: if we use Happstack to write the web application server implementing the game, is streaming updates over a single response even possible?

Happstack model

The core part of any application written for Happstack is a function that takes the request from the browser and returns a Response object that gets sent back to the browser. (Granted, that’s a very simplified version of what you pass to simpleHTTP, but it’s good enough for our discussion here.) Happstack takes care of all the details of actually sending and receiving data over the network, converting things into Haskell objects, and the like.

In Happstack 0.4.1, there are two kinds of Responses, only the first of which is of concern to us. (The second kind is optimized for sending a preexisting file to the browser, which is precisely not what we want.) A Response is really just a (lazy) ByteString, along with HTTP metadata like a status code and a set of headers. Happstack provides a lot of tools to help set it all up, but ultimately the application is responsible for providing the ByteString to be sent to the browser. Once we give Happstack that ByteString, it’s out of the application’s hands.
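
For reference, that first kind of Response looks roughly like this (a simplified sketch based on the fields we’ll revisit in Part 5; the body field’s exact name may differ from Happstack’s source):

data Response = Response { rsCode      :: Int          -- HTTP status code
                         , rsHeaders   :: Headers      -- response headers
                         , rsFlags     :: RsFlags      -- e.g., Content-Length handling
                         , rsValidator :: Maybe (Response -> IO Response)
                         , rsBody      :: L.ByteString -- the entire response body
                         }
              {- | SendFile ..., the file-serving constructor -}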

At first glance, this seems to preclude us from returning any kind of real-time stream. After all, how can the application possibly return a value that it can’t compute until some point in the future? That doesn’t just sound difficult; it sounds physically impossible.

Actually, it turns out that’s the easy part, as we’ll see in Part 2.
