Friday, July 20, 2012

ICFP 2012 Post Mortem

Update: The results of the first round have been published. Our submission was eliminated in the first round (not a huge surprise) and we got 154th place out of a mere 221 submissions that the organizers actually received. We definitely could have done better, but this is not that awful, especially considering some of the bugs that were found after the submission. Only the top 117 submissions advanced to the next round and we were not that far off. Ah, now I can't wait until next year...

After a very long coding session with very little sleep, ICFP concluded and my team did fairly well, considering it was my first time competing on my own and my teammate's first time altogether. In the end, we did submit a program, which is more than I can say for some years that I have competed (and I do mean seriously competed and never submitted, not just tinkering around).

Whew, that was close…

I am always surprised at how long it takes to package up your program and submit it. I got our submission in at 6:59 AM, less than a minute before the deadline. It was quite the nail-biting finish, to be sure.

The Successes

I found this year's problem to be excellent. I never felt that I was completely lost for long periods of time, and yet I never felt that I would completely explore all aspects of the problem, either. This year's problem seemed much less mathematically deep than those of some other years, but it was interesting and very easy to get started with, which makes for a very nice ICFP contest problem.

The problem this year involved writing a program that played a little mining video game (here is a javascript version written by another contestant). A good submission would find an optimal path through the mine to collect as many "lambdas" as possible, avoid dying (rocks falling on top of it or drowning), and find its way to the exit. This year's problem was very accessible. You could submit something immediately that would be a valid program (something that just printed an A for "abort"). Further, there was basically no barrier to improving that solution.
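For a sense of how low that barrier was, the do-nothing submission is a few lines; here is a sketch, assuming (as we did) an SBCL executable that simply writes its move sequence to standard output:

;; A minimal "valid" robot: give up immediately by emitting the abort move.
(defun main ()
  (write-string "A")
  (finish-output))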

What was suspiciously lacking this year was the deep math or computation theory under it all. This seemed to me to be a basic "search as best you can until your time runs out" sort of problem. Then again, we didn't have a real computer science guy or mathematician on the team this year, just a web developer and a computational physicist, so I wonder what other people found that we missed.

Also, I am pretty shocked that the organizers didn't allow people to submit maps to try other people's robots against. I guess it doesn't quite jibe with the storyline, but you could make it fit. Adding direct competition is a good way to make a problem adaptive to the skill set of the contestants. Also, no leader board? Bummer. But these are minor things. The task was all in all very good.

The organizers were also excellent. I say that because I never had to contact them and I never felt (for long periods of time) that the problem description made no sense. Even when I felt (and might still feel) that the task statement was ambiguous, or in some cases just plain wrong, they provided ample examples to clarify at least how they interpreted the task statement's meaning.

My teammate was also excellent. He stuck with it through the entire weekend for very long hours and provided much help in thinking out solutions and strategies as well as hashing out the meaning of various parts of the problem. I only hope he found the contest as fun as I did. All in all a pretty good year for ICFP. Definitely the most fun I have had with ICFP in a long time.

The Remote Development Environment

The collaborative editing tools were mostly successful. There are some annoying (and productivity-hurting) bugs that we hit using Rudel, usually when disconnecting and reconnecting later, but for the most part it worked quite well. The lack of graphics was a real detriment for me. I could have set up a visualization program very easily if there had been a good way to use it over the network, but instead we fought with rendering the maps to Emacs buffers. My teammate suggested we use Flash for visualization, something he has a lot of experience with. While I was hesitant about the idea at the time, due to my aversion to using Flash in general, I now think that this would have been a good course of action.

In the end, however, I very much enjoyed the text-based setup we actually put together. It gave us meaningful printed representations, which are invaluable for debugging a program and something we would have implemented anyway. After three straight days of watching our robot wander around a mine, I feel like I have been playing Nethack for the last 72 hours, battling a wily rust monster. On the right is an example of our robot performing a search on a mine I made up, using a particularly bad search heuristic. Each frame is a system state the robot is considering in the A* search.
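The rendering itself is the easy part; a sketch of the sort of thing we used, assuming the mine is held as a 2D array of characters (an invented representation, for illustration):

;; Print a mine state one row per line, so it can be dumped to the REPL or
;; into any Emacs buffer.  #\R robot, #\* rock, #\\ lambda, #\# wall, etc.
(defun print-mine (mine &optional (stream *standard-output*))
  (destructuring-bind (rows cols) (array-dimensions mine)
    (dotimes (r rows)
      (dotimes (c cols)
        (write-char (aref mine r c) stream))
      (terpri stream))))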

In my opinion, the usage of Rudel made for much less "work off the reservation", basically people working on some aspect of the problem that really isn't that important to the group. In addition, the fact that we were working on the same image gave a certain urgency to changes that temporarily broke functionality. I still think this is a net positive for development of this type, but I would also like to try it more to determine if it truly is.

I also feel that I did a good job organizing the team, save for the number of people I was able to attract (a bit of a pat on the back, but whatever). The EC2 server where we hosted the Lisp image worked pretty much flawlessly. Google+ Hangouts worked very well for sharing people's screens (although Google transcoded the video to an illegible resolution upon the YouTube broadcast). This all worked very well, all things considered. The end bill for running a "small" instance on Amazon EC2 for the weekend, plus the two days before to upgrade to Debian Testing and to make sure everything was working, plus storing the hard drive images, was around $10. In my opinion this is very reasonable. Below are the recordings that we took of the coding sessions. We stopped recording on the second day, and what is there is long, unedited, and, as I mentioned, mostly illegible, but here it is anyway.

I think that actually developing on a machine very similar to the one it would be judged on was a good idea. In the end, I built an executable core image and included a simple script to build the executable. I believe I screwed up here: I needed to explicitly install SBCL in the install script, which I did not do. Luckily, because the image should be binary compatible with the judges' machine, even if the install script fails to run, it will still use the binary that I included in the tarball.
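The build itself is almost nothing; roughly something like the following, where the system name, entry point, and file names are invented for illustration:

;; build.lisp -- load the system and dump an executable core image.
(ql:quickload :lifter)
(sb-ext:save-lisp-and-die "lifter" :executable t :toplevel #'lifter:main)

#!/bin/sh
# install -- what the install script should have done: make sure SBCL is
# present, then rebuild; the prebuilt binary in the tarball is the fallback.
which sbcl || apt-get install -y sbcl
sbcl --load build.lisp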

The Shortcomings

As good a year as this was for ICFP, there were some disappointments, for me at least.

The biggest disappointment this year was that I was unable to get more people from the small Common Lisp community interested in this team. Posts on Reddit, Lispforum, c.l.l., #lisp, #lisp-lab, and even Hacker News resulted in only one interested teammate before the competition. When the competition started, our live coding of the event drove some limited interest and we got two or three interested parties, but none of them were really able to participate with us for various reasons: one individual was confused about the time frame of the competition and missed it, one couldn't get Google+ Hangouts to work in their country (IIRC), and another wanted to watch the Rudel session and may have contributed, but I was understandably timid about either granting ssh access to an unknown party or opening the Rudel server port to the Internet (Rudel doesn't have a read-only mode that I know of). Each of these issues could have been avoided with even a single day of prep/testing time, and I probably could have found a solution with a lower barrier to joining in mid-competition.

I do wonder, however, how much of the low participation was due to the tools I chose to use. Not every Common Lisp programmer uses Slime and Emacs (or some other hypothetical Swank and Obby capable editor). Some others may have been turned off by the Google+ usage. I imagine that other Lispers participated, just not with us. Perhaps we can do better next year.

The time separation (7 hours) was actually a pretty big problem. The idea was to have a meeting before one of us went to sleep and when one woke up to catch everybody up. This proved difficult, as by the time we got around to a meeting, one of us had usually passed out. It is hard to stop time-sensitive work for a meeting. That said, we dealt with it and overcame it. More people would have alleviated this significantly, particularly if they were scattered around different time zones.

One real criticism I have of myself is that I didn't take on a leadership role. In retrospect I think that I should have. Not to be a hard-ass, mind you, but to provide a direction that I wanted to work towards. Once someone sets a direction, I find it is easier for someone else to voice their dissent, which is a good thing. I also suffered from a bit of software architectitis, where I started making packages and files for no good reason. In the end we had around 9 files and 6 packages, some circularly dependent. Because of this needless complication of the program, things needed to be refactored at the last minute to get a submission together, and that refactoring proved more difficult than it should have been. This was one of the main factors that resulted in us barely submitting in time.

The last irritating bit was that, as per usual, I didn't quite perform as well as I wanted. The way it should have gone (giving a liberal amount of time for each step):

  1. Friday 13:00 UTC: We are basically playing Dig-Dug, okay! (Up until the very end I was expecting fire breathing dragons to show up)
  2. Friday 14:00 UTC: We need to find good paths in a graph
  3. Friday 15:00 UTC: Parser implemented
  4. Friday 20:00 UTC: Simulator implemented
  5. Saturday 00:00 UTC: A* search implemented
  6. Saturday - Monday 12:00 UTC: Alternating between updating the simulator and tweaking the heuristic function until it works well
  7. ? UTC: Some brilliant insight that makes our program awesome

We were roughly on schedule until step 4, IIRC. Things didn't continue on schedule for a number of reasons. First, everything just takes longer than we think it should. One big setback was that I implemented the simulator in a way that did not use the task description's notation; my teammate wanted it to use that notation (so index values matched, etc.). It was a valid concern, and I had to change it. This caused more bugs than would usually be there in a very crucial part of the program, but it may have avoided other bugs cropping up from confusion later in development. This was not a huge setback, as these bugs just changed the rules of the game and didn't crash things, so certain aspects of the game were merely weird from time to time. Looking back, the thing that took way more time than it should have was the decision to settle on an A* search for finding solutions, which should have been the first thing we tried and tweaked from there. By the end of day two of the competition, I had grabbed Peter Norvig's PAIP code and was using this as a search mechanism (using beam search). It wasn't until the middle of day three that I switched to A*, which is much more appropriate. Besides making me question my competency as a programmer, this really made me question why we don't have the code from PAIP in an easy-to-download (a.k.a. Quicklisp-accessible) and easy-to-use form. I think I will make this one of my projects: maintaining a modern version of some of Norvig's code, particularly the search tools (I have long wanted to package an improved version of the CAS he provides).
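For what it's worth, the A* loop itself is tiny; the real work is in the state representation, the successor function, and the heuristic. Here is a sketch of the shape of the thing (all names are invented, and a real version wants a proper priority queue rather than re-sorting a list):

;; Generic A* sketch.  EXPAND-FN returns a list of (next-state . step-cost)
;; pairs, HEURISTIC-FN estimates remaining cost, GOAL-P tests for the exit.
(defun a-star (start expand-fn heuristic-fn goal-p)
  ;; OPEN holds (f g state path) entries, kept sorted by f = g + h.
  (let ((open (list (list (funcall heuristic-fn start) 0 start nil)))
        (seen (make-hash-table :test #'equalp)))
    (loop while open do
      (destructuring-bind (f g state path) (pop open)
        (declare (ignore f))
        (cond ((funcall goal-p state)
               (return (reverse path)))    ; states visited, in order
              ((gethash state seen))       ; already expanded, skip it
              (t
               (setf (gethash state seen) t)
               (dolist (succ (funcall expand-fn state))
                 (destructuring-bind (next . cost) succ
                   (let ((g* (+ g cost)))
                     (push (list (+ g* (funcall heuristic-fn next))
                                 g* next (cons next path))
                           open))))
               (setf open (sort open #'< :key #'first))))))))

The heuristic function is then where all of the tweaking in step 6 happens.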

Programming competitions and software development

As I often feel after an ICFP competition, I think that I could do much better if I attempted it the next weekend. I'm not talking about giving us a few more days. That wouldn't have helped at all. I was pretty exhausted by the end of it. I mean that competing in the ICFP contest, getting code out the door in a short period of time, requires a certain frame of mind that is distinctly different from my usual state of mind when developing. You have to re-adjust how your brain works in order to be successful in these short deadline competitions.

I simply do not work this way in my job or hobby projects. I tend to enjoy the process of developing a well-thought-out program, or better yet, a document, as I very much like literate programming. A well-written program, in my opinion, is a program that reads like a journal article. This is not a conducive mindset for the ICFP contest, to say the least.

At the start I was using Git to keep track of changes. That adds development overhead which tends to be worth it, even beneficial, in ordinary software development, but it is a downright detriment to your productivity in a competition like ICFP. When I finally stopped using Git, things went a lot faster. At the start, I spent time thinking about organization. Later, I just hacked code in wherever it would fit. Again, this eliminated a lot of development overhead. There were even a few moments when I started writing documentation for the theory of why I was doing this or that. This was quickly stopped when I noticed what I was doing. It was as if I had to unlearn any and all good practices that I have picked up over the years in order to compete in ICFP effectively. After I got into a quick development mindset, the whole thing became much easier, but it took days for me to reach that point.

I have come to the conclusion that being good at short-deadline contests relates to actual software development about the same way as running relates to driving a car. Both running and driving move you places, but they are extremely different activities. Granted, you do have to worry about similar problems: don't run into things, you need some form of fuel to get you moving, you have rubber objects that insulate you from the ground. But being a good runner doesn't make you a good driver, and being a good driver doesn't make you a good runner. You are thinking about two vastly different sets of important factors when you are doing each. You optimize for different goals and each is good for achieving different things. Driving is clearly faster over long distances, but it is often faster to sprint a short distance than to get into a car and drive there, particularly if the path is off road. However, as with running and driving, I see no reason why people shouldn't train for both if they want to be good at both.

Wednesday, July 18, 2012

Efficiency of Pseudo Random Numbers in Lisp

I came across Alasdair's posts regarding his exploration of pseudo-random number generators, or PRNGs, in high- and low-level languages. These posts show timings of a hypothetical PRNG for different implementation languages and "big integer" libraries. This PRNG is defined as:

\[ x_{0} = b = 7,~~ p=2^{31} - 1 \]

\[ x_{i} = b^{x_{i-1}}\pmod{p} \]

His Python version follows (you can go to his site to see the C versions):

def printpowers(n):
  a,p = 7,2^31-1
  for i in range(10^n):
    a = power_mod(7,a,p)
    if mod(i,10^(n-1))==0:
       print i,a

When I read this post, I wondered how SBCL would stack up, so I took literally one minute to port the C version into a Lisp version (porting the Python version might have gone faster) and evaluated it in SBCL. I spent absolutely no time at all thinking about optimization. Afterward, when writing this post, I made a few cosmetic changes to make it a little more pleasing to the eye, but the execution is identical. My Lisp version looks like this:

(defun print-powers (n)
  (let ((b 7)
        (p (- (expt 2 31) 1)))
    (iter (for i below (expt 10 n))
      (for x
           ;; x_i = b^{x_{i-1}} mod p, starting from x_0 = 7
           initially (cl-primality::expt-mod b 7 p)
           then (cl-primality::expt-mod b x p))
      ;; keep every 10^(n-1)th value (hard-coded here for n = 6)
      (when (zerop (mod i 100000))
        (collect x)))))
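All of the work happens in expt-mod, which is just the usual square-and-multiply modular exponentiation. If you don't want the cl-primality dependency, a sketch of the same operation is only a few lines (this is a generic sketch, not the library's actual implementation):

;; Right-to-left square-and-multiply: computes base^power mod modulus
;; without ever forming the full bignum base^power.
(defun expt-mod (base power modulus)
  (loop with result = 1
        with b = (mod base modulus)
        while (plusp power)
        do (when (oddp power)
             (setf result (mod (* result b) modulus)))
           (setf b (mod (* b b) modulus)
                 power (ash power -1))
        finally (return result)))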

Here are the times taken to calculate one million PRNs in the various implementations. Because I am comparing implementations across different systems, I cannot compare the times directly, so I compiled the GMP version and used it to compute a scaling factor (I tried to also test how the Python version compared, but it wouldn't run with the information Alasdair provided). This means that only the starred values in the "calibrated time" column are actually measured; the others are approximated by assuming the performance difference between my machine and Alasdair's can be summed up as a simple numeric constant. Results are in seconds.

Implementation     Reported time    Calibrated time
Python             74.1             110
C GMP              0.90             1.3 *
C no big ints      1.5              2.2
C MPIR             0.78             1.1
C PARI             0.52             0.75
C TomMath          13.              18.
Lisp (SBCL)        --               1.8 *

(* measured directly on my machine)
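To make the calibration concrete: the factor comes from the GMP row, and every other calibrated time is just the reported time scaled by it, roughly

\[ t_{\text{calibrated}} \approx \frac{1.3}{0.90}\, t_{\text{reported}} \approx 1.4\, t_{\text{reported}}, \]

so, for example, Python's reported 74.1 s lands somewhere on the order of 110 s.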

Judging from the fact that Alasdair describes himself as a high level language programmer and this is his first foray into the world of low level optimization, it is safe to assume that some of these numbers could be made significantly lower. In particular it is surprising that the C calculation without any big integer math at all does so poorly. Also, the extremely poor performance of TomMath is something to be investigated. Be that as it may, the "one minute to write" Lisp version holds up pretty well against all competitors in the timing tests and absolutely shines compared to the Python version. It maintains Python's readability and is nearly two orders of magnitude faster.

Though I am quite fond of several aspects of Common Lisp, I am not really in the business of promoting Common Lisp as an alternative to everything. That said, I absolutely do think it has a place in scientific and mathematical computing, quite possibly more than Python does. I think that Common Lisp gives a clear path from tinkering at the REPL, to quickly prototyping an algorithm, to optimizing it if necessary, then porting to a lower-level language if absolutely necessary. Much the same can probably be said of Python; the crucial difference is that the initial algorithm prototype in Common Lisp will likely be within a factor of two of your initial C implementation, rather than a factor of 100. The fact that most software in academia never goes past the prototyping phase (it usually isn't necessary to optimize in order to get a publishable result) makes this all the more important. These results reinforce my opinion that Python should not be used as anything other than a glue language for low-level libraries and, of course, for I/O-bound programming such as user interfaces.

That difference between Python and Lisp (SBCL in this case) is partly in the compiler, to be sure, but also in the community and its opinions of what is important. If you are willing to wait, that gap in the quality of the compiler will certainly become smaller but that might not be a priority for the Python community. While you wait, some other language may come into vogue, perhaps improving or hurting performance. I think that in the mean time, however, Common Lisp has a very large advantage in this area that should not be ignored.

Saturday, July 7, 2012

Quicklisp July Update

I was scanning the IRC #lisp logs for any replies to my ICFP advertisement and saw this:

19:41:31 *Xach* adds a bunch of smithzv libraries

That seemed odd; I checked the Quicklisp libraries and there was nothing new in them. Then the announcement of the July release came across my RSS feed today and I was surprised to see:

New: asdf-encodings, backports, cl-6502, cl-factoring, cl-libusb, cl-neo4j, cl-nxt, cl-openstack, cl-permutation, cl-plumbing, cl-primality, cl-protobufs, clx-xkeyboard, coretest, hh-web, lisp-interface-library, parse-float, pythonic-string-reader, recur, single-threaded-ccl, sip-hash.

Emphasis mine. This is pretty awesome, four libraries I maintain have found their way into the Quicklisp repository. I guess the work I did refactoring some of my libraries paid off.

This is kind of scary, as well. This means that more people are using my libraries, even libraries that haven't seen an official "this is ready to go" stamp of approval from me. While it is certainly true that CL-Primality and CL-Factoring are solid and ready to go (CL-Factoring is largely centered around another man's work done years ago, I'm just bringing it up to date), the library CL-Plumbing is kind of not extremely useful (yet) and I really wanted to integrate Pythonic-String-Reader with Named-Readtables or something like it to modularize the read table changes that it effects. CL-Plumbing, in particular, has already been changed incompatibly in my local repo.

That said, it is very good to see vibrancy in the CL community, even if it means some scrambling on my part every once in a while and some non-backward compatible interface changes here and there.

Adventures in Collaborative Coding With Common Lisp

Update: I've posted a post mortem of the team's attempt this year.

In anticipation of the upcoming ICFP contest (by the way, I am still looking for teammates; we could use a handful more before I feel we will saturate our workload), I started looking into collaborative tools for coding. I am aware of a large set of tools that might be useful. This post will describe some of these and how we might use them. I am looking at using some subset of Emacs (of course), Slime, Rudel, Mumble, Google+ hangouts, VNC, X11 forwarding (perhaps using XPra), Dropbox, perhaps Git, and naturally ssh to tie it all together.

Collaborative Editing Topology

The basic topology will be like this. I don't know if this helps anybody, but it looks pretty.

Communication between collaborators happens via Mumble and Google+. Google+ has the nice feature that whatever happens in the Hangout will be mirrored to a live Youtube stream and will be saved for future viewing. Files can be exchanged using Dropbox. Rudel allows us to quickly work together and see what others are doing.

The production server is communicated with via Slime/Swank, X forwarding and/or VNC, all via an ssh tunnel. We need X forwarding or VNC in order to make any sort of graphical stuff painless (well, less painful). After experimentation, this is still quite painful. Still looking for a good solution here.

At the end of this post is a pair of videos of a collaborative session I had with one other person on Thursday. The pair of videos is altogether quite long and the quality of the video is quite low (much lower than during the actual Hangout), so low that you cannot actually read the text. I'm still trying to figure out how to save the session data well. This was my second attempt to get some kind of example for this post, and I felt I couldn't sit on it any longer with the ICFP Contest quickly approaching.

The production server

The first step is to set up a server that can host your Lisp image. This server can really be anything, but you should keep in mind that giving users Swank access to a server is basically giving them shell access at the Lisp image's privilege level. This means that unless you really trust your collaborators, you should be wary of using a server you care about.

I chose to host off of Amazon EC2 as you only pay per hour of use, so I can start up a fresh system, set it up, and ditch it hours or days later without paying for a month as you might in other places. In a subsequent post (to be posted soon, I hope), I will detail how to set up an EC2 instance for this purpose.

This server will host a Rudel session, a lisp image with a swank connection, optionally a VNC server and Mumble server, and be connected via Dropbox.

Slime/Swank

Most Common Lisp people are probably intimately familiar with Slime and Swank. We are going to be using Slime and Swank to set up a communal Lisp image. Multiple users are going to connect to it and awesomeness will ensue.

We can also have local Lisp images for quicker and/or dirtier work (we don't want to eval broken code on the communal server if we can help it) and anything that needs graphics to run. This is simple and Slime/Swank is ready to go using M-x slime-connect. Use the Slime selector to quickly switch between open connections.
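Concretely, the communal image just runs a Swank server that everybody reaches through an ssh tunnel; something along these lines (the port number is arbitrary, the host name invented):

;; On the production server, inside the communal Lisp image:
(ql:quickload :swank)
(swank:create-server :port 4005 :dont-close t)

# On each collaborator's machine: forward the Swank port over ssh, then
# M-x slime-connect to localhost, port 4005.
ssh -N -L 4005:localhost:4005 user@production-server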

Rudel

Rudel is a collaborative editing library for Emacs. It can use many backends, but we will be using the Obby backend as that was the only one that was easy to set up. Once Rudel is loaded, one person can host a session, which is then joined by any number of participants. We will be hosting the rudel session from the production server. Buffers within Emacs can then be published by one party and subscribed to by any number of other people. After this, that buffer on each computer will hold the same contents, updated in real time as the people code. Text edited by a particular user will be marked in his/her specific background color. There are some packages you will need:

apt-get install emacs23 gnutls-bin avahi-daemon avahi-utils

Setting up Rudel is easy so long as you get the correct version (the one from SourceForge). Once you download and extract it, just add this single line to your .emacs file.

(load-file "~/.emacs.d/rudel-0.2-4/rudel-loaddefs.el")

Note that Rudel uses "C-c c" as a prefix command, which is weird to me, so if you use "C-c c" for anything, either remove that binding or set up your own binding after you load Rudel, so that you effectively clobber Rudel's.

With a couple of exceptions (see below), Rudel is pretty painless to use: just join the session, subscribe to some buffers or publish your own, and start editing. It is a good idea to have your Rudel session hosted by an Emacs instance on the production server (so you don't have to kill your Emacs to reset any problems). This is also a good idea just so your computer isn't the single point of failure for the team. You can go to sleep and shut down your computer without affecting others.
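The day-to-day workflow is just a handful of interactive commands; roughly the following (command names are from the Rudel 0.2 Obby backend as I remember them, and the default Obby port is 6522):

M-x rudel-host-session    ; in the Emacs running on the production server
M-x rudel-join-session    ; in each collaborator's Emacs (backend obby, port 6522)
M-x rudel-publish-buffer  ; offer the buffer you are editing to the group
M-x rudel-subscribe       ; attach to a buffer somebody else published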

One issue for lisp programming is that you can't share the REPL buffer (slime-repl-mode can't be simply turned on and off, nor can you insert text into it all willy-nilly like Rudel assumes it can). However, you can share a Slime scratch buffer, or any buffer that is in slime-mode, which is basically just as good. Google+ allows you to make more involved presentations at the REPL between collaborators if that is needed.

One annoying thing about Rudel is that it seems to be impossible to actually leave a session. When you attempt to leave the session via rudel-end-session, you are disconnected and unsubscribed from all of the buffers, but your login remains and the server keeps the connection open (I believe). This doesn't seem too bad until you try to join again and realize that you can't, because your username (and possibly color) is still in use. To get around this, I just append a number to the end of my username in order to make it fresh every time. Regarding colors, I just pick a garish one when logging in and then change it to something better once I have joined (once you have joined, you can have the same color as someone else). Most likely, most people will work with colors turned off anyway.

Another annoying (but logically consistent) feature of Rudel is that M-x undo will undo other people's edits as well as your own. This is sometimes desired, but often not when two people are writing code concurrently. If kill-then-undo is burned into your muscle memory instead of kill-then-yank, you might have some problems. I am trying to come up with a workaround for that particular case. Other cases can be handled by simply specifying the region and using undo within that region (see the undo help page).

Git, Rudel, and Dropbox

The summary of this section is that these tools don't work together, at all, at a fundamental level. Use Rudel. People can use Git on a person-by-person basis. Just give up on the idea of sharing a source directory via Dropbox. It is a lost cause to try combining Dropbox, Rudel, or Git simultaneously. Be warned that Git will be crippled when used this way: you can't do any of the good Git stuff like branches, reverts, and merges, as it will mess up everybody else's Rudel buffers. Always unsubscribe, do any fancy Git commands you like, then republish (possibly under a different name), or subscribe and replace the entire file (presumably with the approval of the people sharing the file).

While I have never participated in a short-deadline contest and actually used a version control system, I am sufficiently sold on the idea of distributed version control that I would like the option of using it here. Using a version control system has kind of been integrated into how I think development should be done in general. Development should be broken into smaller, separable tasks, which should be made as commits with commit logs telling future developers (a.k.a. you) why this was done. Developing without it would make me feel naked, or at least haphazard.

That said, there is a problem with using git and Dropbox at the same time. In fact, it is a very fundamental problem. Dropbox is attempting to make two or more directories seem to be the same, no matter what computer you are on. Git, on the other hand, explicitly works under the assumption that the directories are on two different computers and are absolutely independent.

For an example of this conflict, consider a group of people collaborating on a project using Dropbox. As one person edits files on his computer, these edits are quietly sent to the other computers (which causes you to have to constantly revert buffers in Emacs and leads to conflicts, but let's say we are okay with that). If you are using Git, any change to the repo will also be synced. This seems good at first, until you realize that the index is in the repo. This means that you can't develop like Git wants you to develop, incrementally building up the index, crafting your commit, then committing. Each developer would step on the others' toes as they add to the index. Instead you need a process where you build your index in your head, then put a freeze on development to commit, e.g. "hey, nobody do anything, I am going to commit something." This basically eliminates most of the positives that Git brings to the table.

I tried several schemes for moving the .git directory out of the Dropbox folder, which fixes this whole index problem. When you do this, you get back all of that Git goodness, but you lose the idea of a synced repo, so why use Dropbox at all? In fact it is worse than that. You now have two conflicting ideas of what the merged repo will be. You cannot combine the two, only discard one and accept the other. So, I submit that Dropbox and Git just do not mix for this purpose. It can't be done in a sane way.
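(For reference, the kind of scheme I mean is something like keeping the repository itself outside the synced folder and only syncing the working tree; the paths here are made up:)

# The repository lives outside Dropbox, so the index is no longer shared,
# but then neither is the repository itself.
git init --separate-git-dir="$HOME/icfp.git" "$HOME/Dropbox/icfp"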

Everything I just described regarding Git and Dropbox is also true of Git and Rudel. Rudel, however, comes with the extra limitation that you can't just change files on disk anymore. The buffers might be saved to your disk, but the real buffer is in the "cloud". So, changing a file on disk, or reverting to the file on disk, will break Rudel; from what I have seen, your buffer will no longer be in sync with the others. It is important to note that you could actually remove this limitation by replacing revert-buffer with something that edits the changed lines in a Rudel-approved fashion, but right now, this is not supported.

I lean towards using Rudel, as I feel it is more important. We will still use Dropbox for easy file transfer, and people can still try to use Git so long as they don't attempt to use any commands that will change the buffers on disk. No one should try to concurrently develop in the same synced Dropbox folder, though. I might set up a backup script that will run every 5 minutes or something and take a snapshot (using rdiff-backup, for instance) of my files so that we can roll back to previous versions if everything hits the fan.
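That backup script can be as simple as a crontab entry; a sketch, with invented paths:

# Snapshot the shared source tree every 5 minutes.  rdiff-backup stores
# reverse diffs, so any earlier state can be restored later.
*/5 * * * * rdiff-backup /home/icfp/src /home/icfp/backups/src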

Another thing that can be done with Git is to have a single person in charge of version control of a given file. That person will watch what other people are doing and make commits as needed. That person can also institute reverts, branches, and merges, but such actions really need to be done via a safer mechanism than a simple git-checkout or git-merge. The person in charge of version control should really unpublish the file and republish it once the change is made.

Google+ and Mumble

I really like Google+ Hangouts; they are about as close to sitting at the same desk as someone else as video chat has ever gotten. As nice as Google+ is, it is a bit annoying to have to leave that CPU/network hog running non-stop in order to communicate. This can be partially handled by Mumble, a lightweight push-to-talk VoIP program that can be left running non-stop without many issues. I'm not sure which will be better in practice.

VNC and X Forwarding

VNC has a lot of issues when it comes to connecting two peers, particularly two peers that might be using an ISP that won't allow incoming connections from the Internet. It will work well with one computer acting as a server that is accessible from the Internet. However, even when you have VNC working, you can usually count OpenGL out of the equation. Attempting to use OpenGL on EC2 resulted in a crash of the Lisp system, if I remember correctly. Replacing the OpenGL drivers with a software renderer might help (in fact it seems necessary, as the EC2 server has no video card for a hardware driver to make sense of). But the main issues are the lag, which is pretty bad, and the general frame rate. However, you can certainly set up a GUI with buttons, combo boxes, static images, etc., and it will work fine. The only real issue is real-time graphics.

X forwarding is another option; it can be made pretty efficient with the help of XPra or NX, and with XPra, at least, the window can be detached and re-attached by someone else. This is not really collaborative, though: to my knowledge, there is no way to use this technology (or technology like it, e.g. XPra or NX) in a collaborative way. If you do choose to use it on EC2, I could only get it to work after installing the software rendering drivers (otherwise the Lisp system crashed; this is actually not that uncommon if you are using CL-GLUT).
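For the record, the XPra workflow looks roughly like this (the display number, host name, and program are invented for illustration):

# On the server: start a virtual display and run the graphical program in it.
xpra start :100
DISPLAY=:100 ./visualizer &
# On a client: attach over ssh; detach, and someone else can attach later.
xpra attach ssh:user@production-server:100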

The Experience

This is a really neat experience. Rudel and sharing a Lisp image were a very new experience for me, and it felt like there was a lot of potential there. It will take me, and probably others, some time to actually wrap my head around all of the implications here. I oftentimes found myself forgetting that I can edit the buffer while someone else is editing something else, or that I can evaluate code and it will instantly become available to the other users. I can only imagine that this parallel development could scale nicely with more people. You do need to coordinate the development, but this is always true. Problems need to be broken into distinct, separable subtasks, and the solutions to those subtasks need well-defined interfaces in order to prevent breaking other people's code, but this is, again, always part of any development with more than one person. There are also times where it is very clearly a win, for instance, when writing unit tests or debugging at the REPL.

Of course, if you are really interested in playing with this, hopefully this post, and optionally my subsequent post on setting up EC2, will allow you and your friends to try this out yourselves. Also, I'd once again like to put in a plug for my ICFP team: join us, it will be fun. Beyond that, at least for the time being, I will put out a standing offer that if you want to have a collaborative coding session with me, in Common Lisp, let me know and I will probably be happy to participate.

Here are the videos of the coding session. Again, I apologize for the quality and the slowness of the development (Oleg and I are still learning each other's style). The task we were setting out to accomplish was to design a program that could solve a maze.


Other things we could have used

There are tons of tools out there. I am aware that you can do a lot with communal Screen sessions if you are willing to limit yourself to the terminal. You can also just run Emacs over X11 forwarding (Emacs has the capability to spawn frames on different displays). This might work, but some have said that Emacs can freeze if one of the users drops their connection (I suppose without closing the window).
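The multi-display trick is a one-liner from the running Emacs (the display name here is invented):

;; Open a frame for this Emacs session on a collaborator's X display.  The
;; collaborator has to allow the connection, e.g. via xhost or ssh -X.
(make-frame-on-display "collaborator-host:0.0")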

If anybody knows of any other awesome tools, or a better setup than this, please comment below; I'd love to hear about it.

Wednesday, July 4, 2012

Ubuntu 12.04 vs. Emacs Key Bindings

As I am becoming more at home in the Unity interface, I came across the annoying problem that Unity binds the <Control><Alt>t chord to starting a terminal window. While that is a fine chord for it, and I believe that starting a new terminal should be as easy as possible, that key binding is already used for something I use even more frequently: transposing sexps (transpose-sexps) in Emacs.

In the past this could easily be rebound using ccsm, the CompizConfig Settings Manager, by entering the Gnome Compatibility plug-in and setting it to something else. Unity (at least what comes with 12.04) ignores this binding, it seems. In fact it seems that there are several places where this binding might be set. I remember stumbling upon a list of bindings in MyUnity, or UbuntuTweak, or some other third-party tweaking app, but I have long since forgotten where that was. But no matter where I found and disabled that key binding, it never had any effect.

I finally took the time to work out a solution today. The solution is to get my hands dirty and use gconf-editor directly. I don't think gconf-editor is included in the default install of Ubuntu, so you need to:

sudo aptitude install gconf-editor

Start the program and search for keys that have the word "terminal" in them. I found three places where that key-binding was specified and I wiped out the value in each, though it was the first key value that seemed to matter. Then if you wish to set a binding, run CompizConfig Settings Manager and edit the key binding to start a terminal under the Gnome Compatibility plug-in. Once again, CompizConfig Settings Manager doesn't come installed by default, so:

sudo aptitude install compizconfig-settings-manager
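Incidentally, if you would rather stay on the command line than click through gconf-editor, gconftool-2 can clear the same kind of key; note that the key path below is a guess from memory and may well differ on your install:

# Wipe the old GNOME terminal-launcher binding (key path is an assumption;
# use gconf-editor's search for "terminal" to find the real one on your system).
gconftool-2 --set --type string /apps/gnome_settings_daemon/keybindings/terminal ""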

I don't think that Canonical is really interested in promoting deep customization of their OS or window manager, something that is not very GNU/Linux or Libre Software like. This is a very different zeitgeist from the GNU/Linux of a decade ago, or even five years ago. Maybe that is why I had such trouble changing this binding. How is this not a bug that would have been fixed in 11.04? It's all fine, though, so long as they don't take that extra step of actually obstructing people from customizing things.

Update: I recently reinstalled Ubuntu 12.04 and used none of the old configuration files in the new install. After this, setting the shortcut under Settings -> Keyboard -> Shortcuts (tab) -> Launchers does the correct thing. No need for gconf-editor or ccsm or anything else.