Posts in Category: General

My new job at PromptWorks, and thoughts on developer interviews

I’m excited to officially start my new job at PromptWorks next week. The slogan on their website says it all: “we are craftsmen.” If you’ve seen my Clean Code talk, you know what software craftsmanship means to me. An important aspect of it is to keep improving your skills. I’ve been working at PromptWorks on a contract basis for the past several weeks, and I can tell already that I will learn a lot from my new co-workers. They place a strong emphasis on Agile practices, quality, and working at a sustainable pace. I’ve seen enough so far to know that this isn’t just talk, and that their focus is on building long-term relationships with their clients and their staff. They’re also very involved in the local tech community. Among other things, one of them oversees the philly.rb Ruby meetup group.

They’re also supportive of me working remotely while living in Fukuoka, Japan this summer and fall, which is very generous of them (especially for a new hire).

I interviewed with several different companies recently, and for me, the most dreadful part of interviewing is being asked to do live coding. Sometimes this takes the form of a pop quiz, where I’m presented with some out-of-the-ordinary coding problem and expected to write code on a whiteboard, or hack together a quick script to solve it. Other times it’s a surprise mini-project I’m expected to do on the spot. Even though I’ve been coding for close to 20 years, and I’ve had plenty of experience doing quality work faster than expected, I’m terrible at these coding exercises.

The issue for me is that they are nothing like doing real work. The only times in my life I’ve had to think up code on the spot for a surprise problem and write it on a whiteboard have been in interviews. And in a real job I don’t think I’ve ever had a project dropped on me out of the blue and been asked to code up a solution in an hour or two, with severe consequences if I make a mistake or try to talk to anyone about it.

My thinking process is largely driven by understanding context (the context of the code, and the context of the business problem), and these coding exercises are usually devoid of context. I’ve also trained myself over the years to not just hack things together. I was told in one interview that, sorry, you won’t have time to write tests. Telling me to take my best practices and throw them out the window in an interview strikes me as completely backwards.

How to best interview programmers is a hotly debated topic. Some very respected people, like Joel Spolsky, swear by the whiteboard-coding approach. Others say you’re doing it wrong:

A candidate would come in, usually all dressed up in their best suit and tie, we’d sit down and have a talk. That talk was essentially like an oral exam in college. I would ask them to code algorithms for all the usual cute little CS problems and I’d get answers with wildly varying qualities. Some were shooting their pre-canned answers at me with unreasonable speed. They were prepared for exactly this kind of interview. Others would break under the “pressure”, barely able to continue the interview…

But how did the candidates we selected measure up? The truth is, we got very mixed results. Many of them were average, very few were excellent, and some were absolutely awful fits for their positions. So at best, the interview had no actual effect on the quality of people we were selecting, and I’m afraid that at worst, we may have skewed the scale in favor of the bad ones…

So what should a developer job interview look like then? Simple: eliminate the exam part of the interview altogether. Instead, ask a few open-ended questions that invite your candidates to elaborate about their programming work.

– What’s the last project you worked on at your former employer?
– Tell me about some of your favorite projects.
– What projects are you working on in your spare time?
– What online hacker communities do you participate in?
– Tell me about some (programming/technical) issues that you feel passionately about.

When I became Director of the web team at the Penn School of Medicine, I led an overhaul of how we conducted our interviews, and we adopted questions similar to these. We focused on behavior-description questions, which are much more revealing than you might expect if you haven’t tried them before. We also asked interviewees to bring in a sample of their code, and we’d have them talk us through it in the interview and answer any questions we had about it. This was an excellent and reliable way to gauge their experience level and to get past shyness and nervousness. Anyone who’s done halfway decent work becomes animated when showing off work they’re proud of.

For my interview with PromptWorks, they gave me a small project to do on my own time and turn in a few days later, which is a good approach. They also had me do a pair programming exercise. I was worried about that at first, but the focus was on understanding my thought process and overall problem-solving approach, not on how fast I could tear through it or hitting me with “gotcha” questions.

And they hired me, so I must have gotten something right 😉

Doug Engelbart passes away

If you use a mouse, hyperlinks, video conferencing, WYSIWYG word processor, multi-window user interface, shared documents, shared database, documents with images & text, keyword search, instant messaging, synchronous collaboration, asynchronous collaboration — thank Doug Engelbart

That quote is from one of Engelbart’s peers. It’s worth taking a few minutes to read the rest of his post, to learn about Doug Engelbart. Personal computing and the internet would not be what they are if it weren’t for his contributions.

About 14 years ago, when Maria and I worked at Stanford, we had dinner with him and his girlfriend, and another couple. He couldn’t have been more pleasant and down to earth. At the time I knew a bit about his history, but not the full extent of his contributions. And I left that dinner still not knowing – he was a modest man. Dave Crocker is someone who worked with him, and he wrote the following last night, after Engelbart’s daughter shared the news of his passing: “Besides the considerable technical contributions of Doug’s project at SRI, theirs was a group that did much to create the open and collaborative tone of the Internet that we’ve come to consider as automatic and natural, but were unusual in those days.”

The San Jose Mercury News today republished a profile of him from 1999:

But the mild-mannered computer scientist who created the computer mouse, windows-style personal computing, hyperlinking–the clickable links used in the World Wide Web–even e-mail and video conferencing, was ridiculed and shunted aside. For much of his career he was treated as a heretic by the industry titans who ultimately made billions off his inventions…

Engelbart is perhaps the most dramatic example of the valley’s habit of forgetting engineers whose brilliance helped build companies–and entire industries. CEOs fail to mention them in corporate press releases; they never become household names. Yet we use their products, or the fruits of their ideas, every day…

“We were doing this for humanity. It would never occur to us to try and cash in on it. That’s still where Doug’s mind is,” explains Rulifson, director of Sun’s Networking and Security Center…

Engelbart’s unwillingness to bend was in evidence when he met Steve Jobs for the first time in the early 1980s. It was 15 years since Engelbart had invented the computer mouse and other critical components for the personal computer, and Jobs was busy integrating them into his Macintosh.

Apple Computer Inc.’s hot-shot founder touted the Macintosh’s capabilities to Engelbart. But instead of applauding Jobs, who was delivering to the masses Engelbart’s new way to work, the father of personal computing was annoyed. In his opinion, Jobs had missed the most important piece of his vision: networking. Engelbart’s 1968 system introduced the idea of networking personal computer workstations so people could solve problems collaboratively. This was the whole point of the revolution.

“I said, ‘It [the Macintosh] is terribly limited. It has no access to anyone else’s documents, to e-mail, to common repositories of information,’” recalls Engelbart. “Steve said, ‘All the computing power you need will be on your desk top.’”

“I told him, ‘But that’s like having an exotic office without a telephone or door.’” Jobs ignored Engelbart. And Engelbart was baffled.

“We’d been using electronic mail since 1970 [over the government-backed ARPA network, predecessor to the Internet]. But both Apple and Microsoft Corp. ignored the network. You have to ask ‘Why?’” He shrugs his shoulders, a practiced gesture after 30 frustrating years…

Here is a set of highlights from his famous 1968 demo of the systems his team developed, showing early versions of computer software and hardware we now consider commonplace. In the 8th video, he shows their online, collaborative document editing system, which looks like an early version of Google Docs. In the 3rd video, he describes the empirical and evolutionary approach they took to their development process. This was another of his ideas that the industry discarded, only to finally re-discover its value, more than 30 years later, as what’s now called Agile development.

The 50 trillion dollar iPhone

Today, at the Agile Testing and BDD Exchange conference, Bob Martin mentioned an article in the EE Times about how microprocessors have changed the world. I looked it up, and the article uses a truly amazing example to make the point. Suppose it’s the late 1940s, and you want to build a device with the computing power of an iPhone. The most sophisticated computer at the time was ENIAC, which was powered by 17,468 vacuum tubes, had about 5 million hand-soldered joints, weighed 27 tons, and occupied 1800 square feet. A single iPhone contains about 100 billion transistors and weighs just under 4 ounces. Building the equivalent back then would have required:

  • Weight: 2,500 Nimitz-class aircraft carriers
  • Volume: 170 Vertical Assembly Buildings (the VAB is at the Kennedy Space Center and is the largest single-story building in the world)
  • Power: over a terawatt, requiring all the output of 500 Olkiluoto power plants (the largest nuclear power plant in the world)
  • Cost: $50 trillion (the economic output of the entire world in 2011 was about $70 trillion)

And now you can put one in your pocket.

Bob went on to point out a fascinating contrast to that exponential advance in computing power: just how little computer programming has changed. Languages have come and gone, but programmers are still writing if statements and while loops. What we think of as modern advances, like object-oriented programming, were originally thought up in the 1960s.

Personally, I don’t see this as a problem. Programming languages are languages – they are forms of human expression. The world has changed in many dramatic ways since the time of Shakespeare, but we can read Shakespeare today and still relate to the motives, passions, and failings of the characters. Programming languages exist to communicate a painstaking set of instructions (and therefore aren’t as engaging to read as Shakespeare). But their domain is still that of human expression, for communicating often astonishingly subtle, complex, ever-changing, and sometimes seemingly contradictory needs. So, to me, it’s perfectly logical that, while syntax and techniques may be refined over time, the fundamental aspects of programming languages today would be much more familiar to a programmer from the 1950s than the incredibly small and powerful devices on which they now run.

Practical XSLT Examples: Transforming an XML Document to XHTML

Having been involved with only one significant XSLT project using PHP (the PennMed Clinical Trials project), I don’t consider myself an expert. I did run into some issues, however, that required me to go beyond what was available in the online tutorials I found, and to dig into discussion forums, as well as figure out some things on my own. I’ll share some of those experiences here, with practical examples of transforming an XML document to XHTML. This is not a general introduction or tutorial. For that, I recommend the w3schools.com XSLT Tutorial.

  1. Should you perform the transformation on the client side or server side? Unless you have some special reason not to, I recommend transforming on the server side. Why make yourself deal with possible cross-browser compatibility issues in your XSL code, when you can have the server do the transformation and send the browser nice, tidy XHTML instead?
  2. Using PHP’s XSLTProcessor: the PHP portion of the transformation is straightforward. The 7 lines of code in the example on the php.net site are very similar to the code I used in my application; a sketch of the full round trip appears after this list.
  3. The XML file: here’s a sample XML file from my project. It’s a document describing a clinical trial. My examples below will come from this document.
  4. Your XSL stylesheet’s outermost template match: everything I read said the outermost template match in your XSL file should be:
    <xsl:template match="/">

    which indicates the root of the document tree, therefore giving you access to all the document’s content. I disagree with this recommendation, at least as far as my project goes. The clinical trials XML documents have all their content contained in a single “clinical_study” tag. Therefore my outermost template match is:

    <xsl:template match="clinical_study">

    This way, I don’t have to repeat “clinical_study/” in every child XSL tag.

  5. Tags that appear only once: it’s vital to fully understand the XML documents you’re processing, so you know which tags might appear multiple times, and whether they have child tags. Tags that appear only once are the easiest to process. Here’s an example of how to display the value of such a tag; this is from a list of eligibility criteria for a clinical trial:
    <li>Gender: <xsl:value-of select="eligibility/gender"/></li>
  6. Tags that appear multiple times, without children: A clinical trial can address one or more medical conditions. They are listed in the XML like this:
    <condition>Metastatic Anaplastic Thyroid Cancer</condition>
    <condition>Metastatic Differentiated Thyroid Cancer</condition>

    Looping through them requires applying a separate xsl template tag. At the point in the XSL stylesheet where we want the conditions to be displayed, we apply the template like this:

    <ul>
    <xsl:apply-templates select="condition"/>
    </ul>

    Then near the end of the XSL stylesheet, after we close the main “clinical_study” template, we define this template:

    <xsl:template match="condition">
        <li><xsl:value-of select="."/></li>
    </xsl:template>

    The “.” indicates that we want to select the value of the tag itself (analogous to a “.” when listing the contents of a directory, which refers to the directory itself).

  7. Tags that appear multiple times, with children: the clinical trials XML documents can have one or more “location” tags (the example here happens to have only one). In our transformation, we want to display the contact information for the studies where the location is the University of Pennsylvania or the Children’s Hospital of Pennsylvania. As before, we indicate the template tag to apply, but this time with a conditional test which I’ll explain below:
    <xsl:if test="location/facility[contains(name,$upenn) or contains(name,$chop)]">
        <h3>Local Contact</h3>
        <xsl:apply-templates select="location" mode="contact"/>
    </xsl:if>

    …And the template:

    <xsl:template match="location" mode="contact">
        <xsl:if test="contains(facility/name, $upenn) or contains(facility/name, $chop)">
            <p>
            <xsl:choose>
              <xsl:when test="contact/last_name">
                <xsl:value-of select="contact/last_name"/>
                <xsl:if test="contact/phone">, <xsl:value-of select="contact/phone"/></xsl:if>
                <xsl:if test="contact/phone_ext"><xsl:text> </xsl:text>x<xsl:value-of select="contact/phone_ext"/></xsl:if>
                <xsl:if test="contact/email">, <a href="mailto:{contact/email}"><xsl:value-of select="contact/email"/></a></xsl:if>
              </xsl:when>
              <xsl:otherwise>
                A local contact person has not been assigned yet.
              </xsl:otherwise>
            </xsl:choose>
            <br />
            <xsl:value-of select="facility/name"/><br />
            <xsl:value-of select="facility/address/city"/>, <xsl:value-of select="facility/address/state"/><xsl:text> </xsl:text><xsl:value-of select="facility/address/zip"/><br />
            </p>
        </xsl:if>
    </xsl:template>

    There’s a lot going on here, and the next few items break it down…

  8. Variable scope: You can define your own variables in XSL:
    <xsl:variable name="chop">Children's Hospital of Philadelphia</xsl:variable>

    It’s important to note that they are scoped tightly. If you define or alter the value of a variable within a loop, that value will be gone when the loop ends. In this case I defined my variables near the top of the document, before the “clinical_study” template, so they are available for use in any template in the stylesheet.

  9. Testing for a condition in multiple tags: The use of XPath predicates allows us to search through all of the “location” tags in the XML document. Note that this:
    <xsl:if test="location/facility[contains(name,$upenn) or contains(name,$chop)]">

    is not equivalent to:

    <xsl:if test="contains(location/facility/name,$upenn) or contains(location/facility/name,$chop)">

    The former searches all the “location” tags in the document for Penn or CHOP, and we’re using it to determine whether we should show the “Local Contact” section. We use code similar to the latter within the “location” template, as we check each location (if we tried to use it in the main clinical_study template, it would check only the first “location” tag in the document).

  10. The template “mode” attribute: in my XSL I need to loop through the “location” tags more than once, and for more than one purpose. I loop through them once to get contact information, which is what this template is for. I loop through them again later in the stylesheet to extract information on the Investigators leading the trials. For that I have a different “location” template with mode="investigator".
  11. Handling quotes: the reason I defined a variable for CHOP instead of running the “contains” test on a plain string literal is that the XSL processor will throw an error on the apostrophe in “Children’s”. The string literal inside the double-quote-delimited test attribute must itself be single-quoted, and XPath 1.0 offers no way to escape a single quote within it; using a variable sidesteps the problem.
  12. Referencing XML values within an XHTML tag: To get the contact person’s email address in a “mailto” link, we delimit the value in curly braces – <a href="mailto:{contact/email}">. The curly braces extract the value of the tag.
  13. Adding spaces: the XSL parser aggressively strips spaces. It will honor spaces between words in your stylesheet, but it will strip spaces between tags. To force a space, use <xsl:text> </xsl:text> (this one took a while for me to track down, as the discussion forum posts I found on this topic focused on using the &nbsp; entity as the solution, but that is not an elegant approach: &nbsp; is not a native XML entity, and this wouldn’t be a semantically correct use of it anyway). The parser doesn’t do this just to be annoying. If you were using it to create, for example, a PDF document, you would be glad it aggressively strips spaces, as stray spaces could cause major headaches in that context.
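
To show how these pieces fit together, here is a minimal sketch of the full round trip: a skeleton stylesheet with the outermost “clinical_study” template and the “condition” template, plus the server-side PHP that runs the transformation. The file names are placeholders, not the actual ones from my project.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="html"/>

    <!-- variables defined outside any template are visible in every template -->
    <xsl:variable name="chop">Children's Hospital of Philadelphia</xsl:variable>

    <!-- the outermost template: all content is inside "clinical_study" -->
    <xsl:template match="clinical_study">
        <ul>
        <xsl:apply-templates select="condition"/>
        </ul>
    </xsl:template>

    <!-- applied once for each "condition" tag -->
    <xsl:template match="condition">
        <li><xsl:value-of select="."/></li>
    </xsl:template>
</xsl:stylesheet>

And the PHP, along the lines of the XSLTProcessor example on php.net:

<?php
// load the XML document and the XSL stylesheet
$xml = new DOMDocument();
$xml->load('clinical_trial.xml');
$xsl = new DOMDocument();
$xsl->load('clinical_trial.xsl');

// perform the transformation on the server side,
// sending the browser nice, tidy XHTML
$processor = new XSLTProcessor();
$processor->importStylesheet($xsl);
echo $processor->transformToXML($xml);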

CSS and the Limits of Definition Lists

I’ve become a fan of definition lists as a layout tool. Here’s a snippet of HTML using them to mark up a form. You can make the input element label the definition term (dt) and the input element itself the definition data (dd), like so:

<dl>
<dt><label for="first_name">First Name</label></dt>
<dd><input type="text" name="first_name" id="first_name" size="20" /></dd>
<dt><label for="last_name">Last Name</label></dt>
<dd><input type="text" name="last_name" id="last_name" size="20" /></dd>
etc...
</dl>

What makes this better than using an HTML table is that with CSS you can specify whatever layout you want: you can style it so the dt is above, below, to the right, or to the left of its dd partner. This is particularly helpful when you’re writing re-usable code that might be needed in situations where you can’t predict the layout needs.
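
For example, here’s a minimal sketch of the side-by-side layout (the class name and widths are mine, purely for illustration):

dl.form-fields dt {
    float: left;  /* pull each label to the left of its input */
    clear: left;  /* start each label on a new row */
    width: 10em;
}
dl.form-fields dd {
    margin-left: 11em;  /* leave room for the floated dt */
}

Putting the dt above its dd instead is just a matter of removing the float and the margin; the markup stays untouched.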

But tonight I discovered the limit of this approach, which is when you can’t predict whether the vertical height of the dt’s content will exceed the height of the dd’s. I’ve been working on the next version of Shashin, and what’s been driving the effort is the Boxing Dragons art gallery site, which I just finished working on. I’m using Shashin to display a list of albums (in this case artists) with the description of the album alongside the cover image of the album. For the next release of Shashin, I was originally planning to do this markup with definition lists (with the album cover as the dt and the description as the dd), giving users the flexibility to layout their album and description pairs any way they want, via CSS.

This approach to styling definition lists is fairly tidy, and works fine when the height of the dd is the same as or greater than that of the dt. In Firefox it also works when the height of the dt is greater, but not so in IE6 (I’ve been resisting upgrading, so I haven’t tested with IE7). In IE6 the content of a dd will “flow up” into any available space above it, pushing a dd’s content higher than the position of its dt partner.

The clearfix solution for positioning floating divs doesn’t help here (believe me, I tried). I found several threads of people discussing this problem (or something quite similar to it) but no reliable solutions. The best I found was this admirable effort, but it entails about 70 lines of CSS code as well as some goofy markup. It’s also quite fragile – as soon as I started tweaking things like margins even slightly, it would start to fall apart. Although I probably could have gotten the layout I wanted if I kept at it, the CSS would have been so complex it would have defeated the purpose: to make it fairly easy for Shashin users to alter the stylesheet to get the layout they want.

So, at least for now, I’ve given up on using a definition list and have retreated to using a table. The upside is the markup and the CSS are straightforward and cross-browser compatible. The downside for Shashin is that there’s no flexibility: the album covers have to stay on the left, and the descriptions on the right.

“Less Is More” Theme, with CSS Drop Down Menus

New design, with drop down menus

If you do a Google search for “CSS drop down menu”, you’ll find a number of examples that have been provided by well-meaning folks. I wasted a lot of time with them. With only one exception, they were either:

  1. Poorly modularized, in that if I included their stylesheet and javascript files, and then dropped their menu markup inside a div in my design, my page would explode into a million pieces, or
  2. They relied on 100+ lines of JavaScript, which seems really unnecessary in the age of CSS (except for IE’s lack of support for :hover on anything other than an anchor tag); see the sketch after this list, or
  3. If I scrolled through the submenu items, the hover color on the top menu item would disappear, resulting in a goofy menu display (that’s a problem most don’t know how to solve without javascript though, including me).
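
The core of the pure-CSS technique is tiny. Here’s a minimal sketch (the selectors are mine, not from any particular example):

/* hide each submenu until its parent menu item is hovered */
#nav li ul {
    display: none;
}
#nav li:hover ul {
    display: block;
}

Everything else – positioning, colors, and the JavaScript patch for IE’s anchor-only hover support – is layered on top of those two rules.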

The one exception was the CSS Express Drop Down Menu, which was the seventh or eighth one I tried. It has only one small JavaScript function (to patch the IE hover problem), the XHTML and CSS aren’t unnecessarily complicated, and the CSS is very well documented. It even includes special handling for the notorious IE5 for Mac. After dropping in the code, I just had to spend about 20 minutes tweaking the CSS for fonts, colors, and padding to fit my design, and now I’m good to go. If you’re looking for a good CSS drop down menu, this is the one to use!

English Windows XP with a Japanese Keyboard

It would have been much more difficult for me to figure out how to set up my Japanese keyboard without the help of the articles, blog posts, and forum posts that others wrote describing their experiences. I figured out a few things that no one else has written about, so the purpose of this post is to give something back to the community of folks who have also struggled with using Japanese in Windows.

I decided to try my luck using a 109 key Japanese keyboard with my English Windows laptop. I thought it might help my Japanese writing if I learned to use the direct Hiragana and Katakana input, instead of typing in Romaji and relying on MS Word to do the conversions for me. I succeeded in getting everything working, but it took some doing.

The place to start is the excellent article Windows XP Japanese Input. As thorough as that article is, it wasn’t quite enough to get my keyboard working correctly. So the next step is Cameron Beccario’s instructions for installing a Japanese keyboard. My keyboard is USB, but the only driver option available for a Japanese keyboard is PS/2. I picked that anyway and it’s working fine. But that only gets the driver in place – you still need to do some configuration work:

  • Under Control Panel / Regional and Language Settings / Language Settings / Details, I added “English (United States) – Japanese” as the default input language. You do this by going into the “Installed Services” box, and in the “English” section under “Keyboards” click “Add.” Then in the next window, select English as the input language and Japanese as the keyboard layout. After you click “OK”, this should make “Japanese” appear in bold under “Keyboards” in the “Installed Services” box, meaning it’s the default keyboard layout. You need this setting in order for the keys on the Japanese keyboard to map correctly. If you don’t do this, the Japanese keyboard will still work, but the keys will be mapped to a US keyboard layout (which means, for example, you’ll get an @ symbol when you try to enter a ").
    Windows XP language settings for a Japanese keyboard
  • With the foregoing setup, if you use the Language Bar to, for example, switch Microsoft Word to Japanese, you can make the appropriate selections in the Language Bar, type Romaji, and Word will convert it to Hiragana just as it would with a US keyboard. If you want to set it up so that you can simply type the Hiragana as it appears on the Japanese keyboard, then in the Language Bar, select Input Style / Properties, and in the General tab change the input method to Kana.

Some other things worth noting:

  • Under Control Panel / Regional and Language Settings / Advanced, I left English as the language for non-Unicode programs. As explained in the article, setting it to Japanese will cause the \ character to appear as ¥ (the yen symbol), and this setting can cause some programs to automatically install themselves in Japanese. And personally, even though there’s no harm in it, seeing yen symbols where backslashes should be in file paths would drive me crazy.
  • At least with my keyboard and MS Word, the ¥ key will give you a ¥ only if you’re in Romaji input mode (and if you hit it twice, it’ll give you a double backslash). If you switch to Kana input mode, then you can’t get a ¥ from it at all – it instead gives you the Katakana vowel extender character (which looks like a stylized em dash).
  • In the Kana input mode, you can make use of the 4 special Japanese language keys on the keyboard. I found a nice description of them on this Keyboard scancodes page:

    To the left of the spacebar, (Shift-JIS) 無変換 (muhenkan) means no conversion from kana to kanji. To the right of the spacebar, 変換 (henkan) means conversion from kana to kanji. In Microsoft systems it converts the most recently input sequence of kana to the system’s first guess at a string of kanji/kana/etc. with the correct pronunciation and a guess at the meaning. Repeated keypresses change it to other possible guesses which are either less common or less recently used, depending on the situation. The shifted version of this key is 前候補 (zenkouho) which means “previous candidate” — “zen” means “previous”, while “kouho” means “candidate” (explanation courtesy of NIIBE Yutaka) — it rotates back to earlier guesses for kanji conversion. The alt version of this key is 全候補 also pronounced (zenkouho), which means “all candidates” — here, “zen” means “all” — it displays a menu of all known guesses. I never use the latter two functions of the key, because after pushing the henkan key about three times and not getting the desired guess, it displays a menu of all known guesses anyway.

    Next on the right, ひらがな (hiragana) means that phonetic input uses one conventional Japanese phonetic alphabet, which of course can be converted to kanji by pressing the henkan key later. The shifted version is カタカナ (katakana) which means the other Japanese phonetic alphabet, and the alt version is ローマ字 (ro-maji) which means the Roman alphabet.

    Near the upper left, 半/全 (han/zen) means switch between hankaku (half-size, the same size as an ASCII character) and zenkaku (full-size, since the amount of space occupied by a kanji is approximately a square, twice as fat as an ASCII character). It only affects katakana and a few other characters (for example there’s a full-width copy of each ASCII character in addition to the single-byte half-width encodings). The alt version of this is 漢字 (kanji) which actually causes typed Roman phonetic keys to be displayed as Japanese phonetic kana (either hiragana or katakana depending on one of the other keys described above) and doesn’t cause conversion to kanji.

  • It took me a while to figure out the diacritical marks when in Kana input mode, but I finally got it. For example, to make a た (ta) into a だ (da), you hit the た key, and then the ゛ (dakuten) key (the @ key when in English mode), and then Word will merge them into a single character.
  • I have the keyboard hooked up to a laptop which has its own regular US keyboard. There is no way that I know of to have dual keyboard configurations. So this means the laptop keyboard defaults to behaving like a Japanese keyboard, resulting in a number of keys not mapping correctly. I found this isn’t so bad, as you can toggle between the keyboard layouts in the Language Bar (but you just need to remember the Language Bar settings are per program, so you need to toggle each program; and, of course, you can always change the default keyboard layout back to US English).
  • I also discovered that all the Regional and Language Bar settings are per user. So you need to go through all of these steps (except for the driver installation) for each account used on your PC 🙁 (I imagine this can be dealt with at the Administrator level, but I haven’t checked).

I’m a fairly fast typist, and it’s taken about a week to retrain my fingers for some of the different key positions. The hardest thing to get used to is the teeny tiny space bar (it’s only about twice the width of a regular key). Some of the layout reminds me of my old Commodore 64 – double quote is Shift-2, @ has its own key, etc.

ENIAC’s 60th Anniversary

Yesterday was the 60th anniversary of the creation of ENIAC, the world’s first all-electronic computer, here at U Penn. An interview with Presper Eckert, one of its co-inventors, was recently published on the ComputerWorld site. I was fascinated by his description of the Harvard Mark 1, ENIAC’s mechanical predecessor:

It could solve linear differential equations, but only linear equations. It had a long framework divided into sections with a couple dozen shafts buried through it. You could put different gears on the shafts using screwdrivers and hammers and it had “integrators,” that gave [the] product of two shafts coming in on a third shaft coming out. By picking the right gear ratio you should get the right constants in the equation. We used published tables to pick the gear ratios to get whatever number you wanted. The limit on accuracy of this machine was the slippage of the mechanical wheels on the integrator.

And about ENIAC itself:

The ENIAC was the first electronic digital computer and could add those two 10-digit numbers in .00002 seconds — that’s 50,000 times faster than a human, 20,000 times faster than a calculator and 1,500 times faster than the Mark 1. For specialized scientific calculations it was even faster… ENIAC could do three-dimensional, second-order differential equations. We were calculating trajectory tables for the war effort. In those days the trajectory tables were calculated by hundreds of people operating desk calculators — people who were called computers. So the machine that does that work was called a computer… ENIAC had 18,000 vacuum tubes… The radio has only five or six tubes, and television sets have up to 30.

He also mentioned that back then Philadelphia was “Vacuum Tube Valley.” My neighbor, a man in his 70s, told me he used to work on re-entry systems in an office on Walnut St. I asked if he meant programs for people re-entering the work force. “No,” he said, “I worked for GE, designing re-entry systems for astronauts in spaceships.” It seems that little of this technological legacy remains here. Penn’s school of engineering isn’t what it used to be (Penn’s schools of business, architecture, communications, medicine, nursing and veterinary medicine are all top 5 schools, but engineering ranks 27th). And while there are Lockheed-Martin offices and pharmaceutical companies scattered around the tri-state area, and Drexel is a good engineering school, I don’t get any sense that the city of Philadelphia does anything to capitalize on its remaining engineering and technology assets.

AMCAS Moved My Cheese

Last week was the culmination of my work so far here at Penn. I was hired to overhaul the Med School’s web-based admissions tools. Over the past year and a half I’ve written over 32,000 lines of code for this project. That means there are a lot of moving parts. The more moving parts you have, the more features you can offer. On the downside, every moving part you add introduces another possibility for something to go wrong. In a post about a year ago I explained the home-grown development tools we use for UI development and database access (since then we made the unfortunate choice of renaming our “LDL” database access tool to “the API”). With the admissions project I added to this toolset, introducing the concept of “data objects” (I called them that to distinguish them from the UI objects my coworkers were already familiar with). Here’s a presentation I made about a year ago if you want to know the gory details. But the basic point is that, to minimize the potential for chaos, confusion, and things generally going wrong when you have so much code, I went with an object-oriented design for managing and manipulating the data (done properly, this gives you clearly defined containers for your data and functionality, and provides a set of unambiguous “touch points” between all the moving parts). Last week we launched the new tools for the applicants for the 2006 class, and I’m told it’s been the smoothest launch since the Med School first moved the process online four years ago.

Fun with Milk and Cheese

That might not be quite the achievement it sounds like, as they really had nowhere to go but up. That’s not the Med School’s fault though. When someone applies to medical school, they don’t apply directly to the school. They send their application to AMCAS (the American Medical College Application Service), and it’s up to AMCAS to get the application data to the schools where the applicants want to be considered. When AMCAS moved to doing this electronically several years ago, many of the med schools were nervous, so AMCAS tried to cajole them into feeling better about it with the Who Moved My Cheese? approach. Then when their new electronic system went live, it was a total disaster. Which goes to show that sometimes fear is a perfectly rational response to change. In the years since then they’ve improved their system, so there haven’t been any repeats of what happened the first time, but it takes time to rebuild trust after an experience like that. I was rewarded the other day with a t-shirt saying “AMCAS moved my cheese.” I’m amazed they’ve stuck with that slogan.

Microsoft’s Strange Relationship with the English Language

Fortunately we don’t use much Microsoft software at my job. But we do have one vendor-dependent application that requires us to use SQL Server. I needed to add a column to a table indicating when a record was modified. So I dutifully went to Microsoft’s MSDN site to learn how this is done in SQL Server. I came across the “timestamp” data type. “Hmmm,” I foolishly thought, “maybe this will help me with creating a time stamp.” But no, the documentation says: “The SQL Server timestamp data type has nothing to do with times or dates.” It’s actually a sequential record modification marker that’s useful in data recovery, but it has “…no relationship to time.”
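
What I actually needed was a plain datetime column, kept current with a trigger. Here’s a minimal sketch of that approach (the table and column names are hypothetical, not from our vendor’s schema, and it assumes an id primary key):

-- a real modification time: a datetime column with a default
ALTER TABLE my_table
    ADD date_modified datetime NOT NULL DEFAULT GETDATE();

-- keep it current whenever a row is updated
CREATE TRIGGER trg_my_table_modified ON my_table
AFTER UPDATE
AS
BEGIN
    UPDATE t
    SET date_modified = GETDATE()
    FROM my_table t
    INNER JOIN inserted i ON t.id = i.id;
END;

The “timestamp” column, by contrast, is an automatically incrementing binary marker; SQL Server also accepts the more honest synonym rowversion for it.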

I guess this is the kind of stuff people have to spend their time learning when they go for Microsoft Certification.