Month: July 2010

An Eating Day at the Beach

Oh, Sun Diego, shall I count how many ways I love thee? Well, it’s really just one: I can go to the beach any time of the year. Really, I live and work close to the beach, and frequently I’ll just head out for lunch or in the late afternoon.

What if you want to spend the whole day at the beach, though? What are the challenges of a day in the sun from a dietary perspective?

First, the most important one: sunshine, surf, and all the activity at the beach make you very thirsty. Drinks, though, are heavy. Nobody wants to schlep around gallons of fluids. Problem.

Advice: never ever waste fluid space on caloric drinks. Your body is going to demand water primarily, and any calories you add to the mix are empty: they get into your body only because you are thirsty, and then they stay there. So, despite the attraction, avoid the six-pack of beer and the sugary sodas at all costs. Stay away from fruit juices as much as you can, too. You want water, water, water.


Rethinking Traditions

A confession: I don’t like signing emails. I find it stupid. You know the message is from me; after all, your email client tells you that before you even open it. What’s the point of a salutation and a signature? What does a closing “Sincerely” tell you that you didn’t already know?

Turns out there was a good reason for signing. It comes from the days of snail mail, before there was such a thing as a typewriter: the presence of a signature was the only certain way to know who wrote you a letter. I remember the days when you’d get one and turn it over to read who sent it. If I had known I would feel old just for admitting I had ever read a hand-written letter, I would have believed everything science fiction told me.

But now I write emails, and I have gotten used to doing a lot of things you couldn’t really do with paper. For instance, I reply inline, breaking up long messages and replying to a question right beneath it. Or I make creative use of the Subject: header. Or I BCC: myself to have a record of sending the message.

More often than not, I won’t sign an email. It’s not that I forget, and it’s not that I am too lazy. It’s that I find it a truly pointless activity. I will typically close my message with a friendly greeting to the family or coworkers, or a wish for something fun, but I rarely sign. It’s a tradition we keep up just because we don’t think about it.

Since not thinking is not my forte, I checked what else I could rethink. I realized there are quite a few things we do that are just plain stupid and dangerous – but we keep doing them because switching would be dangerous, too. My favorite example, since it hit me once, is cars. In particular, the location of the steering wheel.

You see, the steering wheel is located on the left in right-driving countries, which means that whenever people get into a car, at least one person has to get in on the left, the traffic side. That’s extremely dangerous, both for that person and for traffic. My accident happened in San Francisco, on Ocean Boulevard, when a driver slammed open his truck door while I was next to him, wedged between parked cars and flowing traffic on one of those bike lanes of death. The door was heavy enough that it threw me into traffic. I could easily have been killed, but got “lucky” enough to just not be able to walk for a while.

There were multiple culprits in my accident, bike lane design in particular. The problem of car doors, though, is acute regardless of my personal history. Every day, people get injured by car doors opened at the wrong moment, or by having to stand in traffic while getting into the car. Why did we do something as stupid as that?

Turns out the reason is chivalry. When cars came out, they were exceedingly rare and typically not used by a single person: the whole family would drive around, father at the steering wheel and mother on the passenger side. When automobiles became more common, there was a sense that the father should do the “right” thing and sit on the dangerous side. The mother, on the other hand, shouldn’t be forced to step out onto the dirty street in her skirt, and should be able to get out in comfort and without risk.

To this day, it is considered polite in the Old World for a man to walk on the street side of a sidewalk, keeping the lady on the far side. Just as it is considered polite to walk behind a lady going up a staircase, but in front of her going down. That’s from the days when ladies fainted a lot – the aim was to always be below her on the stairs, ready to catch her if she fell.

Fast forward a hundred years. Cars are everywhere, women are no more likely to sit on the passenger side than men, and they are far less likely to wear a skirt than they used to be. Most people, indeed, drive their own car by themselves, unless they are dropping off kids somewhere. In 2010, it is plain stupid to have the only person getting into and out of a car do so on the traffic side.

We don’t change that for multiple reasons. First, it’s always dangerous to change something that people have grown accustomed to (as any place that switches driving direction finds out). Second, it’s expensive (as any car manufacturer selling into a country with the other driving direction finds out). Third, there is no awareness of the problem (as I found out researching this post). Fourth, we are effing lazy, people!

But, really, it’s all over the place. Wherever we do something out of unthinking tradition, we should really go back and look again. Computer people and writers, for instance, should seriously look at the Dvorak keyboard. As many know, the current QWERTY layout was designed around the limitations of early typewriters – keeping the hammers from jamming – which had the side effect of slowing typists down (and still does, very efficiently). Switching to a Dvorak layout, designed for maximum speed and accuracy in typing, would be good.

The world is full of examples of tradition gone haywire. The Internet, of all places, is a veritable collection of weird habits, despite being so young. Spam, for instance, is an artifact of the days when there were few computers and everybody trusted everyone else. It would be dead easy to fix, but we don’t.

HTTP and HTML are a horrible combination for today’s needs. They were wonderful when they were invented, but we no longer have the problems they were trying to solve, and they couldn’t possibly have anticipated the issues we have now. As an example of the former, the stateless nature of HTTP is excellent for systems with unreliable connections, a problem that these days only mobile phones have, and not for much longer. As an example of the latter, the original HTML spec didn’t include any specific way to embed video, which was quite a stretch back then (think dial-up speeds and YouTube), but accounts for some of the heaviest use today.

Share your ideas! What are some of your favorite examples of tradition gone wrong, and how would you fix them?


Is USB the New Outlet?

I just got a refurbished Garmin Edge 705 (review to follow), along with a bunch of craptastic free sample gadgets from China. The thing they had in common? Instead of having dedicated chargers, they all came with an AC/DC converter to USB.

It used to be that you were flooded with dozens of different chargers. After a while, you’d forget which charger belonged to which gadget, and you had to label them. Every time you moved, you’d have this monstrous mess of chargers that you couldn’t get rid of, because you had long forgotten which gadgets they belonged to.

There were craptastic adaptable chargers with DIP switches for current and voltage and different tips. You constantly risked destroying your gadget by choosing the wrong voltage, so they were really just an item of last resort. It was a nightmare.

So, now it’s USB. I love that. Especially because some converters are better than others, so you can just drop the crappy ones and use the good ones all the time. I particularly like the Kindle converter: it’s tiny and doesn’t protrude into neighboring outlets. (The Garmin one is so huge, I didn’t even take it out of the box.)

Frankly, it’s over a hundred years since AC won over DC, and we are using less and less AC and more and more DC. Household appliances still use AC, as do most power hogs (like the vacuum cleaner), but all smaller devices use some form of DC. Certainly all electronics do – from computers to gadgets.

Why not standardize on DC inside the house, then? Why not convert your outlets to DC, replacing the two pig snouts with 8 USB ports? No real reason, really. And imagine how much better your life would be if you could run your netbook directly from USB!

[Note: I haven’t read the USB spec, so I don’t know what power throughput it calls for – a full-blown laptop might just be out of range.]

Imagine finally being able to just go to any outlet and plugging in your iPhone or iPod or whatever – no need for a charger!

Turns out there is a converter kit for current outlets that adds two USB ports (a little anemic, but better than nothing). I, for one, welcome this amazing change. No more running out of juice at the airport. No more dealing with chargers heavier and bulkier than the gadget they are meant to feed!


YHIHF: Do Internet Shopping Companies Ship by Access?

For a few months now, I’ve been noticing something odd. Whenever I buy something on certain online shopping sites, nothing happens for a while. Then, when I log on to check what’s happening with my order, it mysteriously ships that same day.

It all started with a deal site. I ordered a refurbished computer, and while the site stated they shipped within two days, after four I had no email confirmation, no tracking number, nothing. When I went to the site, it said it was under construction, which sent me into Scam Prevention Mode. I sent them an email demanding an update and alerted my credit card company.

Next morning, UPS faithfully delivered my computer. Overnight shipping. At that point, I thought they had just somehow messed up and wanted to make up for it.

Then I started noticing it on other sites. At first, it was an argument with a big online vendor: I checked their Super-Saver Shipping box for my new Acer laptop, and they sat on the order for a week. When I pointed out that their policy allows free shipping to mean slow shipping, not a slow fulfillment process, they gave me drone talk for a while, but then made it happen. (Coincidentally, the laptop ended up arriving the day before I unexpectedly had to fly to Germany for a funeral.)

Then it was other stuff, and more and more consistently. I’d buy something, and shipping would be slow. I’d typically just have to go online and check, and suddenly there was progress. The only frequent shipper I use that has consistently kept reliable and speedy delivery times is Vitacost. (No wonder they are doing so well!)

I thought about it, and if you have limited inventory – or even just-in-time inventory – this approach works quite well. Until someone goes online and checks on the status, you can assume they are not too worried. When they do check, you know they have started wondering what’s going on, which means it’s time for you to show progress. A little like the employee who comes to work five minutes before the boss so that they always look hard at work. Only here, it’s five minutes after the boss.

The problem with this approach is that it becomes too obvious, and hence the manipulation becomes exploitable in turn. It also leaves a very bad impression. Do you remember when it was shown that a big online retailer (I’d give the name, but I can’t find a news article to link) showed higher prices to its own registered users? It was a very stupid move, because it was easily circumvented (just nuke all the site’s cookies), and it left the loyal users out in the cold.

Same thing in this case. Now I have learned my lesson and will follow up daily on shipments. It annoys me that I have to, which means I am less likely to buy from sites I suspect of this kind of user tracking, and I am looking forward to the first shopping sites that positively proclaim they don’t do it.


The Revenge of the Bland

American cuisine focuses on the bland. I don’t know what it is, but apple pie, turkey, mashed potatoes, hamburgers, steak, casseroles, and what have you tend to be light on the spice, measured in the flavor, balanced. No poblano peppers, no extravagant ginger, no sensuous garlic, no taste-explosive mole. American cuisine, at least the fast food version, has conquered the world because it’s not offensive to anyone.

The downside: you can’t control how much you eat. Popcorn is bland – so bland that you can just stuff it in your mouth until well after feeling sick, because why not? Milk chocolate is bland – have you tried putting down that bag of M&Ms? French fries are all crunch, salt, carbs, and fat – bland and unstoppable. Let’s not even talk about cereals, mac & cheese, or marshmallows.

Things with lots of flavor – good or bad – reach saturation faster. Try eating a pound of dark chocolate M&Ms (really, don’t), or a dozen mole tamales. You would have a really hard time. Too much flavor.

So, I thought to myself: why not add some flavor to the foods you eat, and see whether you can eat less? Lo and behold, it worked! Try adding ginger to apple sauce, or wasabi to your French fries, or dark chocolate to your Graham crackers. It works (for me)!


Comparing eBook Readers

The local Best Buy has a display with different eBook readers, so I got a chance to hold them all in my hands and compare them. Nice way to entice customers, by the way!

Price wars are all the rage right now. The new Nook came out, and the price dropped to $149 (no 3G). Amazon followed suit and dropped the price of the Kindle to $189 (with 3G, compared with $199 for the Nook 3G). Interestingly, Sony still sells ebook readers, and those were on display, too.

Admittedly, considering that the device is not where the money is made for either Barnes & Noble or Amazon, the price of the device is still way too high. At the very least, the companies should offer book discounts to buyers of their respective devices to offset the cost, since you are buying something that (still) ties you to a particular vendor.

Of the devices on display, the Nook clearly had the upper hand. The Sony readers were quite nice, but the decision to put the touchscreen on top of the e-ink dims the display considerably, giving the whole reading area a washed-out, greyish look. The price is right, though, and the e-readers are not affected by the usual Sony price inflation.

My favorite thing about the Nook is clearly the color display at the bottom. It’s not an ideal solution – a full touchscreen device would be much more to my liking – but the separate display has a much better response time than e-ink, making it possible to interact with the device at a pace that is not the sluggish, crawling, boring back-and-forth typical of the Kindle.

I absolutely hate the Kindle keyboard. It’s cumbersome, it’s unresponsive, and it takes up too much space. The Nook’s pop-up on-screen keyboard has the same main issue as the Kindle’s physical keyboard (you cannot touch-type), but at least with the Nook you get the benefit of fast response and multi-use. In the end, I never use the Kindle keyboard because it’s just that bad. Placing notes on the screen is atrocious: you have to joystick your way to the point of entry and then punch the keys to add the note. With the Nook, at least entering a note is quick, and you get the benefit of using the display for other things.

As far as book choice is concerned, the Kindle loses all the way. Sure, you get the largest selection of ebooks, but the Nook and B&N are not far behind. More importantly, though, the Kindle doesn’t support PDF natively, which severely restricts the books you can read. The choice not to support PDF is entirely political: the innards of all ebook readers on the market are virtually identical, and adding PDF support is trivial. The fact that Amazon won’t let that happen says a lot about the company and its aims.

Why is PDF so important? Mostly because if you want to throw out your book collection and use the reader as your main device for, err… reading, you need to be able to take all main book formats on it. Most things readable on this planet come in one of these forms:

  • ebook – whichever proprietary format. The advantage is that the formatting is designed to flow on your screen and can be adjusted depending on font choices
  • text – ASCII or Unicode. The advantages are as with ebooks; the disadvantage is that you are limited to text (and ASCII art)
  • HTML – the advantage is mostly that it’s everywhere and that it (theoretically) allows for the embedding of images; the downside is that it’s bloated and allows for tons of content that you cannot really display on an ebook reader (like the dreadful Flash animations)
  • PDF – the advantage is that it’s universal and precise; the disadvantage is that it is designed for one particular output (usually an 8.5×11 page) and can’t be reflowed for a different screen

PDF has the enormous plus of being a conversion target for all the types mentioned. Any file you can read can be made into a PDF file – worst case, by printing to a PDF file.

Now, the Kindle kinda supports PDF – by sending a PDF file to Amazon, which converts it to its own format and then sends it back to you. That’s not too terrible, and you get the advantage of a file that re-flows. The downside is that you have to go through the extra step of emailing a file, which is both annoying from a time perspective and worrisome from a privacy perspective – who knows what Amazon does with the file?

I did like the Nook, as you may have noticed. It comes with a few interesting extras, like free WiFi at B&N stores and the ability to loan a book to a friend – a major downside of the Kindle being that you cannot give your books away.

Well, I love Amazon’s customer service, but I think the device is clearly not the end of the line. Additionally, despite a screen resolution that is crappy for a device of its size, the iPad is making inroads into the ebook market; there is a whole slew of Android tablets about to pop into the market; and Pixel Qi just released its brand-new dual e-ink/color display to eager users.

It’s going to be an exciting time in the ebook market, and right now Amazon is nowhere near being the winner.


The Role and Importance of Quality Assurance (QA)

There is a moment when the young and enthusiastic learn that seat-of-the-pants development is quick but eventually leads to catastrophe. You can tell what stage engineers are at by asking them what they think of QA: if they think it’s an occupation for the lesser divinities of programming, they aren’t there yet; if they have enough experience, they will think of QA engineers as demi-gods whose verdict makes or breaks months of coding.

Having been in the industry for decades, I am of course a very, very strong proponent of mandatory QA. To me, this last step in the development process fulfills three main goals:

  1. Interface stability and security – Making sure that the code does what it is supposed to do, especially in boundary conditions that developers typically overlook. The most common scenario is that of empty data (null pointers, etc.) where the code assumes there is an object, but testing code for SQL injection is another invaluable example. This has nothing to do with the functionality of the code, but with its ability to behave properly in unusual conditions.
  2. Performance and stress testing – Checking how the code behaves under realistic scenarios, not just in the simple case the developer faces. Instead of 5 users, make 500,000 run concurrently on the software and see what it does. Instead of 100 messages, see what the system does with 100,000,000. Instead of running on a souped-up developer machine with a 25″ display, look at your software from the point of view of a user with a $200 netbook.
  3. User experience and acceptance – Ensuring the flows make sense from the end user’s perspective. Put yourself in the user’s shoes and try performing some common tasks. See what happens if you try doing something normal but atypical. For instance, try adding an extension to a phone number and see whether the software rejects the input.

We have come a long way towards understanding how these three goals help the development process. What is just as important, though, is to see (a) how they have to be implemented, and (b) what the downsides are of not implementing them.

Implementation

The modern trend is towards implementing interface tests at the developer level. The basic idea is that there is a contract between developers, and that each developer has to write a series of tests verifying that the code they wrote actually performs as intended. The upside is that the code will do as desired, and that it is fairly easy to verify what kind of input is tested. The downside is that the testing code almost doubles the amount of programming that needs to be done.
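
As a sketch of what such a developer-level test can look like – the function `parse_amount` and its behavior are made up for illustration – here is a minimal example using Python’s built-in unittest:

```python
import unittest

def parse_amount(text):
    """Hypothetical production code: parse a price string into cents."""
    if text is None or text.strip() == "":
        raise ValueError("empty input")
    dollars, _, cents = text.strip().lstrip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

class ParseAmountTest(unittest.TestCase):
    # The contract: normal input behaves as documented...
    def test_regular_input(self):
        self.assertEqual(parse_amount("$12.50"), 1250)

    # ...and the boundary conditions developers typically overlook
    # (empty data, null values) are pinned down explicitly.
    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount(None)
        with self.assertRaises(ValueError):
            parse_amount("   ")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note how the second test encodes exactly the kind of “empty data” scenario mentioned above – that’s the part of the contract developers forget to write down.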

Agile methods, with their quick iterations, are particularly emphatic about code testing. Each developer is required to provide testing code to match the main deliverable. At first, it seems odd that people willing to throw out code regularly would be so adamant about testing it. Closer inspection, though, shows that without a complete set of tests, the time saved by not writing them is paid back in having to find and remove inconsistencies and incompatible assumptions.

Stress and performance tests usually have to be separated from the interface tests, because they require a complex setup. Performing a pure stress test without a solid data set leads to false negatives (you think the code is OK, but as soon as real data is handled, it breaks where you didn’t think it would). A good QA department will have procedures to create a data set that is representative of the production data and will test against it.

There are two goals to this kind of test: (a) characterization and (b) profiling. Characterization tells the department how the code performs as load increases. Load is a function of many factors (e.g. size of database, number of concurrent users, rate of page hits, usage mix) and a good QA department will analyze a series of these factors to determine a combined breaking point – a limit beyond which the software either doesn’t function anymore or doesn’t perform sufficiently well.
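
A characterization run can be sketched in a few lines. In this sketch, `handle_request` is a stand-in for the real system under test, and the load factors are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for the system under test (hypothetical)."""
    time.sleep(0.001)  # pretend each request takes about a millisecond

def throughput(concurrency, requests=200):
    """Requests per second completed at a given concurrency level."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, range(requests)))
    return requests / (time.perf_counter() - start)

# Characterization: sweep the load factor and look for the knee in
# the curve - the point where adding load stops adding throughput.
for workers in (1, 5, 25, 100):
    print(f"{workers:>3} workers: {throughput(workers):8.0f} req/s")
```

In a real characterization you would sweep several factors at once (database size, concurrent users, page hit rate) against the production-like data set, not just thread count.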

Profiling, on the other hand, helps the developers. The software profile gives the developers an idea of where the code breaks down – ironic, considering that a software profile is a breakdown of where the processor spends its time. Profiling needs very active interaction between QA and development, but is a very powerful tool for both.
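
Such a profile can be produced with Python’s built-in cProfile module; the hot spot `slow_lookup` below is hypothetical:

```python
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    """Hypothetical hot spot: a linear scan inside a loop (O(n*m))."""
    return [items.index(t) for t in targets]

def run():
    items = list(range(5000))
    slow_lookup(items, items[::7])

# Collect the profile while the workload runs.
profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

# Print the functions where the processor spent its time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The report immediately points at `slow_lookup` as the place worth optimizing – exactly the kind of information QA can hand back to development.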

Finally, user acceptance tests are performed by domain experts, or using scripts provided by domain experts. This is the most delicate function of QA, because the testers become advocates or stand-ins for users. In this capacity, they test how the software “feels”. They have to develop a sense of what the user will ultimately think when faced with the software.

It is here that the tension between developers and testers is worst. The attitude of many developers is that the software performs as intended, and they are frequently upset when a tester complains about something that seems immaterial but forces them to do a lot of work over something minor, like splitting a page in two or reversing the logic with which information is gathered.

It is also here that the engineering manager has to be most adamant and most supportive of the testers. Ultimately, the users will perform the same tasks many, many times. To them, an extra click may translate into hours wasted on a daily basis, something that would infuriate anyone.

Not Implementation

What is the downside of not implementing Quality Assurance? If you are a cash-strapped, resource-strapped Internet startup, the cruel logic of time and money almost forces you to do without things like QA, regardless of the consequences. So let’s look at what happens when you don’t follow best practices.

First, you can easily do without unit tests in the beginning. I know, you wouldn’t have expected to hear that from me, but as long as your application is in flux and the number of developers is small, unit tests are very inefficient. You see, the more you change the way your application operates, the more likely you are to have to toss your unit tests overboard. On the other hand, the fewer developers you have, the less they are going to have to use each other’s code.

Problems start occurring later on, and you certainly want to have unit tests in place after your first beefy release. What I like to do is schedule unit test writing for the period right after the first beta release – little time is allocated to new development then, and you don’t want to push the developers onto the next release yet, potentially causing all sorts of issues with development environments out of sync. So it’s a good time to fill in the test harnesses and write testing code. Since the developers already know what features are upcoming, they will tend to write tests that will still function after the next release.

Second, performance tests are a must before the very first public release. As an architect, I have seen how frequently the best architecture is maligned because of a stupid implementation mistake that manifests itself only under heavy load. You find the mistake, fix it, and everything works fine – but there is a period between discovery and fix that throws you off.

Performance and scalability problems are very hard to catch and extremely easy to create. The only real way to be proactive about them is to do performance and load testing, and you should really have a test environment in place before anything goes public.

There are loads of software solutions that emulate browser behavior, pretending to be thousands or millions of users. Some are free and open source; many are for-pay and extremely expensive. Typically, the high-end solutions are aimed at non-technical people, while the open source solutions are designed by and for developers.

Finally, lack of final acceptance testing will have consequences mostly if your organization is not able to cope quickly with user feedback. In an ideal world, you would release incremental patches on a frequent basis (say, weekly). Then you can take actual user input and modify the application accordingly.

The discipline required to do this, though, is a little beyond most development shops. Instead, most teams prefer to focus on the next release once one is out the door, and fixing bugs on an ongoing basis is nobody’s idea of a fun time. So you are much better off putting in some sort of gateway function that has a final say in overall product quality.

Many engineering teams have a formal sign-off role: unless the responsible person in the QA department states that the software is ready for consumption, it isn’t shipped. I found that to be too constricting, especially because of the peculiar form of tunnel vision typical of QA: since all they ever see of the software is its bugs, they always tend to think of the software as buggy.

Instead, I think it more useful to have a vote on quality and release: in a meeting chaired by the responsible person in QA, the current state of the release is discussed, and then a formal vote is taken whose modality is known ahead of time. Who gets to vote, with what weight, and based on what information – that’s up to you. But putting the weight of the decision on the shoulders of a person whose only responsibility is detecting issues is unfair.
