September 25, 2009

Austin GDC 2009

Last week, a number of us from the Project Darkstar team were in Austin for the Austin Game Developers Conference. Like last year, we had a large booth on the expo floor. However, while last year we were largely focused on demonstrating Project Darkstar's ability to scale and distribute load across multiple cores and processors on a single node, this year our focus was on showcasing the team's progress on multi-node capabilities. Here's a recap of the week's events:

If you've been following along with Project Darkstar's progress over the years, you know that transparent multi-node scaling is one of its main attractions as a platform. On the other hand, you should also know that since Project Darkstar is still a maturing research project in Sun Labs, these features are not yet done. In fact, we have not yet proven whether what we are attempting can even be done. Our fearless lead architect, Jim Waldo, has put together an excellent series of posts on his blog outlining why and how we hope to achieve this multi-node scaling. He calls it the "Four Miracles."

Our goal for Austin was to put together a compelling visual demonstration of our current progress towards these "Four Miracles." Specifically, we hoped to show how Ann's miracle of transparently moving clients from one node to another in a cluster worked in tandem with Keith's miracle of monitoring a node's health and intelligently deciding which clients to relocate and when. Also, while Jane's miracle of detecting and organizing clients into affinity groups based on social networking algorithms is not complete, we hoped to portray how it should work by rigging up an app that formed affinity groups based on a player's location in the game (specifically, which chat room they were in).

In the weeks leading up to the conference, Keith and I came up with two demos that, for the most part, seemed to hit the mark. Using the JMX facilities already built into Darkstar, I hacked up a monitoring GUI that tapped into a running Darkstar cluster and displayed each of the nodes as a vertical bar. The height of the bar represented the total number of clients connected to that node, and the bar's color optionally represented either the node's health or, as colored fragments, the affinity group of each connected client. For the first demo, we had Project Snowman running in a multi-node cluster and showed how the new node health monitoring features of Darkstar caused client traffic to spill over and be intelligently distributed between the nodes. Here's a quick video of Keith talking through it on the expo floor in Austin:

The second demo had Darkchat running on a multi-node cluster and was designed to show how clients would relocate between nodes depending on which rooms they were connected to in the application. Here's another quick video given by yours truly on the floor at Austin:
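For the curious, here's a rough sketch of the kind of JMX polling that drove the monitoring GUI. The service URL, MBean name, and attribute below are placeholders rather than the actual Darkstar MBeans; the point is just connecting to a node's JMX agent and reading a per-node statistic that can drive the height (or color) of a bar. In the GUI itself, a poll like this would run on a timer against every node in the cluster.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class NodeBarPoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; each Darkstar node would expose its own JMX agent.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // Placeholder MBean name and attribute -- not the real Darkstar MBeans.
            ObjectName nodeStats = new ObjectName("com.example.darkstar:type=NodeStats");
            Number sessions = (Number) mbsc.getAttribute(nodeStats, "SessionCount");

            // In the demo GUI, a value like this drove the height of a node's bar.
            System.out.println("Connected clients on this node: " + sessions);
        } finally {
            connector.close();
        }
    }
}
```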

We had both of these demos running throughout the week on the floor, and the response from people who came by was generally positive. I think one of the main differences I noticed between this year and last year was the amount of quality traffic that came through the booth. I talked to a lot of people at Austin last year, but a large percentage of them had never heard of Project Darkstar or were only vaguely familiar with it. This year many people came by who were already committed to a project using Darkstar, were very interested in our progress, or were familiar with the technology and had a strong desire to learn more. I think the best piece of anecdotal evidence was one person from the Intel booth who came by and said, "Oh, so this is for real now?" He was referring, of course, to the significant headway that we are finally making, and showing, on the multi-node scaling capabilities of the Project Darkstar platform.

In a tough year for just about every industry, I think going to this conference and putting together these demos injected some energy into both the Project Darkstar community and the team. A few more personal observations:
  • The expo floor seemed smaller this year and overall attendance appeared to be down. Not too surprising but hopefully a sign of things past and not things to come.
  • A little tidbit on some of the pains of getting the demos set up. Leading up to the conference, we had everything running just fine in Burlington and the demos packaged up and ready to go. After a few hours of pulling cables, moving pods around, and booting up systems, we fired up the node health demo on Tuesday, the day before the expo floor opened. Except it didn't work. At least not like it did in Burlington. When simulating an overloaded node, instead of offloading clients onto the other node right away, there was a seemingly random and arbitrary delay of 20 to 30 seconds before any clients would be moved. Huh?
  • After some hours of debugging, and pulling Seth into the mix for help, we finally tracked it down. The (still unfinished) node health code offloads identities from a node when it gets overloaded. However, it doesn't just move client identities; it also moves the identities of robots and other NPCs in the system. Since each snowman game has a number of robots, it was choosing to move the robot identities before moving the client identities. The question, of course, is why we never saw this behavior in Burlington. Well, it turns out that the order in which identities are chosen to be moved is deterministic and seemingly alphabetical. In Burlington, our simulated client players were generating identity names based on the hostname of the client machine (dstar1, dstar2, dstarX, ...). The hostnames of the machines used in Austin? x2250-1, x2250-2, etc. So in our Burlington deployment, the client identities were always chosen before the robot identities since they started with a "d"; in our Austin deployment, the client identities were always chosen after the robot identities since they started with an "x" (see the sketch after this list). Unbelievable.
  • Keith gave a talk during a one-hour session, which was awesome. He went through a number of obstacles and challenges he faced when building Darkchat for JavaOne, and I think it came across as very real and genuine.
  • One final note: it appears as though Chuck Norris still doesn't need scalable server technology. All of his CPUs run faster to get away from him. Also, any code that he writes cannot be optimized. For anyone else, though, Project Darkstar could be a solution.
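To make the identity-ordering gotcha above concrete, here's a tiny sketch. The robot identity names are made up, and the real node health code does much more than sort a set of names, but it shows why the same spill-over logic behaved so differently in the two deployments:

```java
import java.util.Arrays;
import java.util.TreeSet;

public class RelocationOrder {
    public static void main(String[] args) {
        // Made-up identity names; the real selection code is more involved
        // than a plain sort, but the effect was the same.
        TreeSet<String> burlington = new TreeSet<String>(Arrays.asList(
                "robot-1", "robot-2", "dstar1", "dstar2"));
        TreeSet<String> austin = new TreeSet<String>(Arrays.asList(
                "robot-1", "robot-2", "x2250-1", "x2250-2"));

        // Burlington: client identities (dstar*) sort before the robots,
        // so they get relocated first and the spill-over looks immediate.
        System.out.println("Burlington: " + burlington);
        // -> [dstar1, dstar2, robot-1, robot-2]

        // Austin: client identities (x2250-*) sort after the robots,
        // so the robots get moved first and the clients lag behind.
        System.out.println("Austin:     " + austin);
        // -> [robot-1, robot-2, x2250-1, x2250-2]
    }
}
```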

September 6, 2009

Ultimate Injury

I haven't written an ultimate post in a while, but needless to say, it's not because I haven't been playing! However, due to recent events, I may not be playing for at least a week or so. During Friday's lunchtime pickup game, I came down hard on an uneven section of ground and rolled my ankle inwards. I thought I could walk it off and actually kept playing the remainder of the game. That was probably not wise, though, as it didn't take long after the game for my ankle to start swelling up. Diagnosis? Classic sprained ankle (although it's the rarer form, a sprain of the ligaments on the inside of the ankle rather than the outside). I've been icing and ibuprofening all weekend, and the swelling is starting to go down, but I have a feeling it's the 15-day DL for me.

Ultimate Statistics (since January 2008):
Total Games Played: 235
Total Hours Played: 277

September 1, 2009

Adieu Reading Rainbow

I caught a glimpse of this headline the other day: Reading Rainbow Reaches Its Final Chapter. Wow! I had no idea that show was still airing. Not only that, but LeVar Burton was still the host! I remember watching Reading Rainbow when I was in first grade and getting a huge kick out of it because "the guy who plays Geordi La Forge" was the host of the show (and look! he's not wearing a VISOR!). My favorite episode, of course, was the one where they went behind the scenes on the set of Star Trek: TNG.

I'm not sure what's weirder, the fact that I remember this, or the fact that I was a Star Trek geek in the first grade...