Month: June 2010

MeeGo to Become Default OS for Nokia Smartphones?

To my complete surprise, I found an article on Slashdot this morning, in which Reuters was quoted as saying that Nokia is going to ditch Symbian on the N series of phones. Instead of the trusted in-house OS, Nokia is moving their flagship product to MeeGo.

For those of you who don’t know, MeeGo is the merger of Nokia’s very own Maemo project and Intel’s Moblin. All three projects have the same aim: to provide a full Linux distribution for mobile devices. Nokia’s main aim is the smartphone market, while Intel started out thinking about MIDs and netbooks.

The move on Nokia’s part is quite unexpected, since they just recently reiterated that Symbian would always be at the core of their smartphone offering. Symbian is great, don’t get me wrong, but I wouldn’t want to develop for it. It’s getting long in the tooth and requires a lot of work to get used to.

Now, what does that mean in the mobile landscape? Why would Nokia move in this direction? What does that all mean in context?

Apple’s iOS has revealed a truth that developers already knew (but not many other people did): an operating system, no matter what class of device it runs on, lives or dies by the applications you can develop for it. Apple’s iPhone is not great as a phone – as a matter of fact, it is quite a frustrating device that reached feature parity with other smartphones only with the current, fourth generation. The iPhone is great because of all the nifty things you can do with it. It’s a smartphone in that it can do a lot of things that you hadn’t even dreamed of doing on a phone.

There is a downside. While Apple was the first company to push app development really hard, it did so in a typically Apple-control-freak way. Applications have to be developed using an approved tool chain, using a language (Objective-C) that only Apple still uses, and they have to undergo rigorous testing and a capricious approval process. That sucks, because Apple doesn’t guarantee anything and you risk spending months writing an app, only to see it languish in approval.

Enter Android, Google’s response to iOS. A custom Linux distribution for smartphones. Android addresses many of the issues of iOS, has much better features, and is available to anyone, for free. Sounds great!

Android has a few issues of its own: the tool chain is fixed again, though for different reasons. Instead of developing in Objective-C, you are forced to use Java. The choice makes sense in a way: after all, Java allows for cross-platform development and was designed from the get-go to run on embedded devices. What’s wrong with that? Nothing, really.

MeeGo comes from a different angle: instead of forcing you to use Java and approved SDKs, MeeGo is a Linux distribution. A full distribution, including package manager, GNU tool chain, standard Linux kernel, etc. You can use Java on it, you can use C, you can use Python, you can use Perl. You can use whatever is available on Linux, minus device specifics.

How does that matter? Why not simply write for the available SDK? Well, you are right, it doesn’t really matter, per se. Maemo itself, for instance, pushed strongly for the use of Python by making all features on the devices accessible using that language. If you developed for the N tablets (770, 800, 810, 900), you probably used Python.

But! But you can use whatever is available on Linux. That means that if there is a piece of software that you need, you simply recompile it for the processor you want and there you go, it runs just fine.

That’s actually a lot more important than you’d think, in two very subtle ways:

1. Source must be available. Since you do not know what particular processor runs on a MeeGo device, you have to provide sources that can be recompiled on any device, or you have to specify what devices your software runs on. This is pretty good, because it means you either hand out an application that is targeted at a specific list of phones (closed), or you give people the ability to make it run on their specific device the way they want it.

2. Backend code running on the desktop or servers can simply be recompiled, guaranteeing compatibility. For instance, for an applet of mine I needed access to tcpdump. I simply took the open source code, recompiled it on Maemo/ARM, and I knew it would run the same. This kind of thing is particularly important when dealing with storage and network connectivity code, when version incompatibility can force hours and days of debugging. For instance, the N900 mostly standardized on SQLite for storage, which means I can attach my N900 to the desktop and manipulate the phone’s storage right from my computer.
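To make that SQLite point concrete, here is a minimal sketch – the table layout is invented for illustration, and an in-memory database stands in for the file you would copy off the mounted phone – showing that the exact same stdlib calls work on the desktop and on the device:

```python
import sqlite3

# The N900 mounted over USB exposes its SQLite files directly; here an
# in-memory database stands in for one, and the table layout is made up.
# The point: identical stdlib calls work on the phone and the desktop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sms (ts INTEGER, sender TEXT, body TEXT)")
conn.execute("INSERT INTO sms VALUES (1275000000, '+15551234', 'hello')")
rows = conn.execute("SELECT sender, body FROM sms ORDER BY ts").fetchall()
print(rows)  # [('+15551234', 'hello')]
```

Because the file format is identical on ARM and x86, there is nothing to port – the same script runs unchanged on either side.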

So far, the main issue with Maemo development and the N900 has been a lack of resources on Nokia’s part. The idea was great, but execution was lagging. Just too many things to do. For instance, the phone is a resource hog of the first order when you install apps, and the battery can easily die within a couple of hours if you are not careful about app control. Android has a battery watch applet – extremely important. Maemo doesn’t.

The N900 doesn’t allow you to attach an external keyboard. Nokia says that phone functionality is their current focus (emphasizing resource constraints), but an Internet tablet you can’t program on is quite a weird thing. To add insult to injury, you cannot attach any USB device because the N900 has no USB host connection. That’s incredibly stupid, because it nixes one of the major advantages of having a full Linux distribution on a phone. Imagine what would happen if you could connect your phone to a printer, a scanner, a webcam, a barcode scanner, etc.

Of course, these limitations are nothing compared to the fundamental bugginess of Maemo, again rooted in resource constraints. The phone does weird things – after a day or two, it’s so sluggish it needs to be rebooted. At times, sound has an echo, as if two copies of the sound server were running in parallel. The phone reboots itself for no apparent reason, even while you are doing absolutely nothing with it or, much worse, when you are in the middle of a phone call.

But! That’s all a matter of resources, and Nokia has them. I am very happy about the merger between Maemo and Moblin, because it wrests control of development away from a single company, Nokia. At the same time, it should add sorely needed resources for polish and bug fixing.

If you ask me, MeeGo is the way to go. Fastest application development time by far, rich user interface, deep hardware support. Now it’s just a matter of marketing and engineering muscle, and I hope that Nokia realizes that.

[As a side note: I realized a few days ago that KDE4 has been out for several years, and that I am still reminiscing about the powerhouse that Amarok once was. Same resource constraints. I so, so wish they had left Amarok alone, or that someone would bring the 1.x series back. Amarok 2.x still sucks, and there is no improvement in sight.]

Read more: http://cinserely.blogspot.com/2010/06/meego-to-become-default-os-for-nokia.html

Kindle and Calibre

I’ve been using Calibre on and off to get content to my Kindle, and I have to admit the software is gaining a lot of functionality as it matures. The version label (currently 0.6.42) doesn’t do its functionality, stability, and ease of use justice at all, and I highly recommend it to anyone with a Kindle, regardless of use.

What do I do with it? I mainly add all those files I couldn’t otherwise read on Kindle. Calibre allows me to easily do the following:

  • Display web pages
  • Get RSS feeds
  • Convert text-based PDF files
  • Show ePub files
  • Access Project Gutenberg ebooks

The flow for each task is slightly different, but all of them are fairly easy to get used to, and in some cases downright simple.

To get a web page onto your Kindle, for instance, you just save the page as HTML and convert it by simply adding it as a new ebook to Calibre. You automatically get the best feature of Calibre: once you tell it what the target device is (Kindle, in this case), it will automatically convert to a format (.mobi) that works on that device, and it will format the ebook for the device’s properties.

RSS feeds are solved differently (and justly so) in that they are scheduled in a separate module. You can add your own RSS feeds, and there are some pre-defined ones. The adding process is not as straightforward as it could be, mostly because RSS feeds are so different – some come with full articles, others with a brief preview, others still with just a link. Once you choose the feeds you are interested in, you decide how often you want them updated, and Calibre will download, convert, and push as often as you’d like.
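For the curious, those feed definitions are small Python classes. As a sketch of one (the title, feed name, and URL below are placeholders, and the attribute values are just plausible defaults), roughly what the built-in ones look like:

```python
from calibre.web.feeds.news import BasicNewsRecipe

class MyFeeds(BasicNewsRecipe):
    # Placeholder title and feed URL -- substitute your own.
    title = 'My Morning Feeds'
    oldest_article = 2          # how many days back to fetch
    max_articles_per_feed = 25
    feeds = [
        ('Example Feed', 'http://example.com/rss.xml'),
    ]
```

A recipe like this only runs inside Calibre itself, which fetches, converts, and pushes on the schedule you set.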

For PDF and ePub files, the process is the same as for web pages. The main difference here is that some PDF and ePub files are protected. Calibre doesn’t know how to deal with that (which is quite to be expected). Otherwise, it does a pretty good job, with only headers and footers handled poorly.

Gutenberg is finally wising up to the Kindle and is starting to publish some of its ebooks in .mobi format. That means you can download them straight to your computer and either add them using Calibre or simply drop them into your Kindle documents folder. Really dead simple.

I would be perfectly happy with Calibre if it added just two features:

  1. Sharing of “recipes.” Things like RSS feeds and header/footer detection need advanced settings that are recipes of some form (XPath for header/footer, for instance). It makes very little sense to write a recipe if you are the only one who uses it, and there is no easy way to share them. I know it’s a pain to set up a repository for automatic sharing, but it would make life so much easier. The current recipe for PDF header/footer, for instance, detects the headers and footers that web browsers put on printed pages. Admittedly a good default – but what am I going to do with the PDF of a book, with the standard chapter heading and page number as h/f?
  2. Direct download of URLs. It annoys me that I have to download a file from the web – be it HTML or .mobi – before I can add it to my list of books. Direct integration with ebook providers (free or paid) would be really welcome, as would an option to add a book from a URL.

I’ll keep you posted. You should certainly try out the program, though, if you like your Kindle for more than reading Amazon offerings.

Read more: http://cinserely.blogspot.com/2010/06/kindle-and-calibre.html

Healthy Living – Savvy Supermarket Buys

One of the things I like doing when visiting a new country or culture is going to the nearest supermarket to see what people buy for food. You learn the most amazing things when you do that. For instance:

Italian supermarkets are full of pasta ingredients. There is typically one full aisle for the pasta itself, then another aisle for tomato products, an aisle full of olive oils. The meats and produce are out of this world, even in your typical grocery store, and they seem never to sell generic produce, only seasonal types.

German supermarkets are full of sweets, breads, and desserts. You’d think the only thing Germans eat is carbohydrates – and empty ones at that. Around every major holiday, the desserts and candy double, overtaking pretty much the whole place. Around Christmas time, you’d think the whole nation has entered a rat race for the most sugar eaten.

French supermarkets are strangely full of all foods we typically associate with France. Rows of cheeses, myriads of wines, savoir vivre everywhere.

When you come to America, then, you get a very odd picture of what people eat. There are rows and rows of cereals, and rows and rows of frozen goods. The produce section is made up entirely of generic, out-of-season fruits and vegetables, and unhealthy snacks and sodas are everywhere.


Quick Greasemonkey Script for Kelley Blue Book

One of the most frustrating user interfaces on the planet is that of Kelley Blue Book. It just seems to be designed around the idea of making you click as many times as possible to get to the information you need, instead of around the notion of quick access. Is that because KBB wants to maximize ad impressions?

Well, it really doesn’t matter. There is only a certain number of clicks I will endure before I start typing URLs manually into a browser, and KBB got me there. In frustration.

How does the navigation work? Well, it’s essentially tree navigation. You choose first whether you want trade-in or retail values. Then you select the model year, then the manufacturer, and finally the model (and later on, options). That’s great if you want to look up the values for a particular car, as for instance if you intend to sell yours.

Now, imagine you are looking to buy a particular type of car – say an AWD sedan. You settled on either an Audi A4 or A6, or a Subaru (any of many models). Your budget is constrained (yeah, well) but you are less interested in the car’s age. I would think that’s the more “normal” case.

So, now you are armed with a list of cars (from Craigslist, cars.com, etc.) and you need to look up their values. Say you found two Foresters, one from 2004, one from 2005. You found an A4 from 2008 and an A6 from 2003. You got an Outback from 2001. They are all nominally in your price range.

So, off you go to KBB. Logically, you’d want to focus on the two manufacturers or six models you are considering, but in KBB, you have to pick the year first. After you choose the year, you can drill down to the car you are considering. When you look up the next Forester, no help: you have to go all the way back to the top (the year) and drill down again.

Of course, if you look at the URL, you notice that it specifies year, then manufacturer, then model. If you just change the year in the URL, then you find what you want. Huh? You don’t have to go all the way to the top?
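That observation is the whole trick. As a sketch (the path layout below is hypothetical, modeled on what the KBB URLs suggest), jumping to a sibling page is just a string operation on the URL:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical KBB-style path: /used-cars/<year>/<make>/<model>
# Swapping the year segment jumps straight to the sibling page --
# exactly what the missing navigation should do for you.
def swap_year(url, new_year):
    parts = urlparse(url)
    segs = parts.path.split("/")
    segs[2] = str(new_year)  # the year is the second path segment here
    return urlunparse(parts._replace(path="/".join(segs)))

print(swap_year("http://www.kbb.com/used-cars/2006/subaru/outback", 2005))
# http://www.kbb.com/used-cars/2005/subaru/outback
```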

Now, why are they navigating this way? It makes no sense from a user’s perspective: when I say I want to look up the value of a 2006 Subaru Outback, the next logical thing to look up is a 2005 Subaru Outback, not a 2006 Lamborghini Countach. But the deal is that the navigation is organized around availability: the year constrains the possible manufacturers (not much), and then the models available.

That means that the programmer found out that each model is available only in certain years, and hence built the navigation around that fact.

The rest of the world thinks that’s stupid.

Hence, the need for a very simple Greasemonkey script that adds the missing navigation. You get to the page you requested (e.g. for the 2006 Subaru Outback), and the links to the year, manufacturer, and model (and trim, where available) are augmented with drop-down boxes that allow you to choose alternatives. You can click on the year drop-down and select a different year for the model listed. Or you could click on the model and see an alternative – all other things being equal.

Let me know if you are interested in the script. It’s not packaged for public consumption yet, but if you want it, you can have it!

Read more: http://cinserely.blogspot.com/2010/06/quick-greasemonkey-script-for-kelley.html

Compact Flash – Long Obsolete?

A few years back, I thought I’d revive my days of taking pictures and bought myself a digital SLR. Back then, they were not all the rage, mostly because they were so incredibly expensive and the pictures they shot were not much better than the point-and-shooters’. Seriously, if it hadn’t been that I was already used to SLRs and that I knew how powerful the lenses are, I would have done without.

I bought a Canon EOS Digital Rebel XT. I am not going to bore you with the specs, since every camera you get for free with a two-pack of ink cartridges is better these days, but it shot passable pictures (not particularly good ones, though) and I thought getting the tele lens out for the AIDS ride might be a good idea. You know, shoot the incoming finishers when they are not two inches away, that kind of thing.

Well, turns out the camera had a stinky 2G card. Back in the day, I am sure, it cost a fortune. I may have chosen it because it was the best bargain, but I could have probably bought two ink cartridges for the price of that one… (Sorry if you didn’t follow the sarcasm.) I thought, well, you can get 8G cards for $20, I’ll just head out to my fave store, The Shack (or whatever they call themselves today), and get myself a Terabyte of them.

I got there and looked at the “options.” There was one card available, a 4G card for $30. It seemed like a seriously bad deal. I actually half thought CF cards had gone out of style and that an SD-to-CF adapter might be the way to go. They didn’t have any of that.

Instead, I decided to go to my least fave store, Best Buy. They opened a new store just across the street, and between the Shack a few blocks East and this new Best Buy, I am not sure how long that little store will survive. In any case, I get into the BB and manage to dodge the first wave of drones that come to “help” me (i.e. upsell me to whatever they need to throw out today).

When I stupidly think that the memory cards are going to be with the portable hard drives, a drone attacks. He’s actually kinda nice and not as salesly as I am used to from the store on Harrison in San Francisco. He doesn’t know of CF cards, though, hasn’t seen them. He is certain, though, they don’t carry an adapter. He runs back to the computer and just in time for me to ponder whether I should leave, he comes back with news: they are in store. The cards, not the adapter.

We walk to the other wall of the digital camera section and see the CF cards. There are three types: outrageously expensive 4GB, outrageously expensive 8GB, and outrageously expensive 16GB. I mean, the prices were laughable – the 16GB ran over $250! For that price, I can easily buy a 2TB portable drive!

Of course, they were supposed to be ultra-fast. But my camera is ultra-slow, so it doesn’t really matter. I ended up picking the 4GB model, praying that everything would work out fine. Then I walked to the digital SLR section and looked. All of them have switched to SD now.

How quickly whole standards become obsolete!

Read more: http://cinserely.blogspot.com/2010/06/compact-flash-long-obsolete.html

My Ideal Phone

I am OK with my N900, but not really happy. The software is still buggy, the thing is too heavy, and the slider keyboard is useless. I do like certain things about it, though, and that got me thinking: what would a perfect phone look like?

Form Factor

I find that the bigger the screen, the more I like a phone. Actually, it’s mostly the resolution that I like – I couldn’t live with the crappy resolution of the 3G iPhone, while the resolution of current Android devices (and of the N900) is OK. Even better, though, would be a phone with a screen that folds in the middle. A little like the current crop of eReaders, only those tend to have two different screens (one eInk, the other LCD-type). Either you flip the phone open (screens protected inside) or the screens are on the outside (one turned off until you flip the phone flat). That way you get twice the screen real estate at half the carrying size. Gotta love that!

Battery Life

Clearly the worst gripe about smart phones is their outlandishly crappy battery life. The N900 fits right in with most and doesn’t quite manage a full work day without a recharge, but that’s absolutely unacceptable for a device you use for mobile communication. Clearly, we need phones that can last a full 24 hours at average use, which includes constant texting and at least two hours of talk time.

Or….

At the very least we need an easier way to charge our phones. I am not talking wireless charging, necessarily, but at the very very least an end to the endless number of plugs and cables. Stop the madness, standardize on one single power input, and make it possible for us to have power outlets into which you can directly plug a phone.

Software

The current crop of phone operating systems isn’t quite there, yet. The problems vary, but it’s mostly a combination of three factors:

  1. the OS vendor tries a lock-in strategy
  2. the relationship between phone and desktop OS is not clearly outlined
  3. the limitations of the tool chains make it hard to develop

iPhone suffers from all three. Windows Mobile is too much desktop, not enough phone. Android chose to go the Java route, which is good for portability but bad for developers, who have to buy into the paradigm.

The N900, on the other hand, would have the winning combination. You can simply take software written for the desktop and recompile it for the phone – make your changes to the user interface, and you are done. That gives you enormous leverage: you can write the backend code once and you know it will work (as is the case with the standard DB format on it, SQLite), you can reuse text mode utilities if you just recompile them, you can stick with whatever expertise you have already.

The problem with the N900 is one of implementation. It’s the winning idea, but it isn’t well done. For instance, it’s a serious bitch to get the development environment going – you have to wade through a ton of web pages that are partially outdated, and even the virtual image offered is not kept current. Things that should be easy (e.g., getting a current re-build of a text mode app for Maemo) are hard. Why not have a current list of Linux source packages in Debian and build them on Maemo by default?

I think Maemo has the same problem that KDE has run into lately: great idea, but not enough people working on it, and those who are working on it are too ambitious about their own agendas.

Connectivity

People of the world, unite! Demand that your smart phone be what you expect it to be: your computer when you don’t want to carry a computer. How come so few smart phones have USB hosts? I want to be able to plug a printer, a scanner, a card reader into my phone. I want to be able to attach my phone to a TV via HDMI. I want my phone to have a standard interface for extensions, so that I can easily plug in a GPS module, or a better camera, or a credit card scanner, or a heart rate monitor.

Manufacturers are making a huge mistake by ignoring connectivity options. Of course, I understand where that comes from: any new connector means thousands of support calls, every open connectivity channel means compatibility issues left and right. Whoever starts the movement is going to have to deal with a host of issues.

But you know what: it’s absolutely worth it. Remember when USB came out? It just didn’t work. You needed drivers for everything, half the USB devices had drivers so poorly written they’d crash your computer, and they were mutually incompatible half the time. It was a nightmare. But now, is there anything easier than USB plug & play?

That’s probably the reason the first connectivity option I mentioned is USB. Just add USB host, and we’ll figure out a way to get modules for our phone that can connect, and software that will work with it.

End App-centricity

One of my worst gripes about the N900 is that the software isn’t standardized. There are some apps that use Berkeley DB, most others use SQLite, and a third set uses incompatible formats. Some things are done well – for instance, the Conversations widget handles both SMS and IM, and the Phone widget does both Skype and GSM well. But there is no overriding architecture for that.

The idea should be that all data are stored in a central repository, and the apps should have access to it. Apps shouldn’t even be allowed to store anything in a format that is not the central repository’s, especially if it is user-dependent and fast-changing (like messages of any kind).

Examples? The email app stinks big time. That’s mostly because it has no unified inbox, so that every GMail account I have requires a bunch of clicks to get to. It also does late binding, which means it will tell you there is a new message, but when you want to see it, you have to first load the current server view. Most importantly, though, you cannot combine things that have the same frequency as email, like RSS feeds, into your email view.

The idea is the life stream – a single point of entry that combines all the data you want into a single view. There you can filter the types as you wish (if you wish). Things are arranged chronologically and updated automatically. They scroll on their own. There are sources of data. There is a view. You can interact with the data the way you see fit.
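Once the data lives in one repository, a life stream like that is easy to build. This Python sketch (the sources and items are invented for illustration) merges pre-sorted per-source streams into one chronological view:

```python
import heapq
from datetime import datetime

# Each source yields (timestamp, kind, text) tuples, already sorted by
# time; heapq.merge interleaves them into a single chronological stream.
email = [(datetime(2010, 6, 1, 9, 0), "email", "Re: meeting"),
         (datetime(2010, 6, 1, 12, 30), "email", "Invoice")]
rss   = [(datetime(2010, 6, 1, 10, 15), "rss", "MeeGo announced"),
         (datetime(2010, 6, 1, 11, 45), "rss", "Calibre 0.6.42 out")]
sms   = [(datetime(2010, 6, 1, 11, 0), "sms", "running late")]

stream = list(heapq.merge(email, rss, sms))  # all inputs pre-sorted
for ts, kind, text in stream:
    print(ts.strftime("%H:%M"), kind.ljust(5), text)
```

Filtering by type is then just a predicate over `kind` – one view, many sources, no app switching.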

But don’t force me to switch apps just because I want to see more data. Even in a phone that does multitasking well, it’s a pain.

Read more: http://cinserely.blogspot.com/2010/06/my-ideal-phone.html

MMS on Nokia N900/T-Mobile

OK, this is the latest on N900 and T-Mobile MMS. It works, but you have to figure out a few things. To save you the time and aggravation, here is my recipe:

  1. Download and install fMMS (via the Application Manager).
  2. In fMMS, go to Settings -> Internet Connection Settings.
  3. Enter the current settings from this page (they worked for me).
  4. You are ready to go.

The settings (in the format used by fMMS), in case you can’t reach the page or whatever, are summarized below:

  • Access point name: wap.voicestream.com
  • MMSC: http://mms.msg.eng.t-mobile.com/mms/wapenc
  • User name: 
  • Password:
  • HTTP proxy: 216.155.165.50

User name and password are blank. Let me know if it doesn’t work for you!

[Oh, and by the way – the settings will work only if you are on the T-Mobile Internet connection. WiFi won’t work.]

Read more: http://cinserely.blogspot.com/2010/06/mms-on-nokia-n900t-mobile.html

Mashups, Lady GaGa, and the Rebirth of Pop

A slightly unusual post today. I’ll talk about music for a change, though with a firm rooting in technology.

As someone who lived through the 80s, I can say that whatever you may think of politics, economics, fashion, or architecture, you’ll have to admit that music in that decade was exciting. Sure, the 60s had the Beatles, the Rolling Stones, etc. – but the 80s had by far the best pop music. In addition, they started an explosion of musical movements and genres (rap, punk, hip hop, etc.) that came to fruition (and popularity) a lot later.

There were, by my reckoning, two main reasons for this explosion. For one, the CD came on the market and with it a completely new ball game for music distribution. CDs were incredibly cheap to mass-produce and incredibly accurate in their rendition, and every player rendered them equally well. Everybody could listen to high quality music from a cheap player, and CDs could be sold at extremely low cost.

The other reason is the advent of PC technology. Married with the MIDI interface for musical instruments, it put music-making in everyone’s reach. A two-man band could perform the same music that until then had required a whole band, and professional recording was increasingly affordable. The democratization of music had begun, and the fruits were clearly visible.

The 90s brought a double shock. For one, there was enormous consolidation in the music industry, with smaller labels being gobbled up by larger ones, and these in turn by even larger ones. The end result was a choking off of the creativity and inventiveness of small labels in favor of big names. Even those big names, though, were choked off, as the story of Prince shows quite clearly.

The second component, in a wonderful symmetry to the perfect duo of the 80s, was technological. In the 90s, technology to copy CDs became more and more widely available, and the long decline in sales of music began.

The new Millennium brought an intensification of the trend. For one, there was the Internet: the perfect medium for the transmission of digital files. Napster started something that became clearly unstoppable, while the music labels continued their consolidation, rejecting the search for quality in favor of the predictable.

You see, music has had a trend toward simplification. Pop music in particular had started a long progression towards less and less content in music. You had an idea – a new melody, beat, or harmony – and you’d extend it for four minutes. That was fine and in the trend of the time – the mixing shops would take your song and extend it from four to eight or even twelve minutes.

Going dancing soon became an exercise in boredom, at least as far as the music was concerned. You’d listen to something that would go on forever, pulsating its way into your brain until you zoned out. Music became the background, not the reason for dancing any more.

In doing so, music bucked a trend towards increasing complexity and information density that the Internet has brought on. Consider the new trend in editing: take a scene with low information content (such as a person moving on a straight trajectory) and increase the speed in editing, acting as if the viewer had used the fast forward button. That’s the trend: make information more compact, give me a constant stream of information, don’t bore me. We have become very efficient information processors, and the Internet has become the main source for our constant desire for information – and fuels increased ability to process information.

Music couldn’t skip the trend forever. The information density in music had to increase again, making music richer than it was any time in the past. We wouldn’t accept a musical idea spread over four minutes for long, things had to change.

The first ones to do something about it were DJs. They took matters into their own hands and decided to do something innovative: combine different songs into one. For it to work, the songs had to have similar beats and compatible chord progressions – and lo and behold, the lack of imagination in music made that all possible!

It so came to pass that mashups became all the rage. You can hear all sorts of mashups, for instance by typing “mashup” in the YouTube search bar. Some are combinations of current pop chart songs, some combine those with rock and roll classics. An astonishing number of them mixes Eurythmics’ Sweet Dreams (Are Made of This) with whatever flavor of the day is available.

A second degree of mashup is the multi-mash. DJ Earworm is an outstanding example of this kind of work – since 2007 he has produced a mashup of the top 25 most popular songs of each year (amongst other work). These mashups are staggering in quality and complexity and succeed in creating a new genre in music, one where other people’s songs are the instruments used to create a new song. (Oddly, that’s basically the idea that Edgard Varèse had 50 years ago. His music, though, sounds really alien and frightening.)

Enter Lady GaGa. Those who know her point to her whole persona as the reason for her success: the outfits, the political opinions, the collaborations with other artists. That’s all very true, but without innovative music, Lady GaGa would not succeed.

Listening to her most popular songs, one notices that they are more complex than typical pop songs. Instead of the simple aba or ababa structure, in which chorus and narrative alternate, her songs have three, four, five key musical concepts that are followed around and alternated. Bad Romance, the crazed sinfonietta whose video became the most watched in YouTube history, is particularly rich with unrelated themes. Lady GaGa is also perfectly able to have those themes play with each other, adding a mashup element to the dramatic complexity of her music.

You may like the Lady or not, but you’ll find out that soon everybody is going to write songs like hers – or even more complex. And we can get back to the musical explosion of the 80s.

Read more: http://cinserely.blogspot.com/2010/06/mashups-lady-gaga-and-rebirth-of-pop.html

Security Matters

When I was little, I recall watching this popular science program in which Peter Ustinov popularized the theory of relativity. There was a rocket, a man on the rocket, a launch of the man and the rocket, and a transmission from the really fast man on the really fast rocket. The man on the really fast rocket saw earthlings slow down. Conversely, the earthlings saw the man talk really fast. Makes sense?

No, it doesn’t. Even a cursory understanding of relativity tells you that the other’s time must appear to slow down for both observers, not slow down for one and speed up for the other. So both the man on the rocket and the earth station would have observed an apparent slowdown in the other’s time. Of course, that seems confusing, since once the man on the really fast rocket returns to earth, more time will have passed on earth than for the man – but the reason for that is not speed, but acceleration.

This intro serves to explain something that has been bothering me for a while: the way people misunderstand information security concepts and continually use the wrong tool for the right purpose. It’s really not hard, since there are only a few, very distinct concepts – yet people get them wrong all the time. It’s a little as if people took “security” as a one-size-fits-all umbrella, and doing something secure meant doing everything under the umbrella.

I was reading this Slashdot article this morning. Apparently, people were using TOR routing to send confidential information back and forth, not realizing that TOR anonymizes connections (that is, correlation between information source and destination) but not content. Anyone with access to a TOR node can snoop all the data passing through it, and if the data is not protected, it’s fair game.

So, what are the fundamental things you can do in security? Here is a partial list:

  • Protect content from eavesdropping (encryption in transit)
  • Protect content as stored (encryption at rest)
  • Ensure the content received is the content you sent (signing and private key encryption)
  • Ensure only the intended recipient can read the content you sent (public key encryption)
  • Ensure nobody knows sender and recipient are talking to each other (anonymization)
  • Ensure the content you received is really from the sender you expect (PKI, certificates)
  • Ensure the person connecting with you is who they say they are (login)
  • Ensure the connection made is from a person you have logged in (authentication)
  • Ensure the person who is requesting an action is allowed to perform it (authorization)

At first, all of this may seem to be one and the same thing, but it really isn’t. If you try to accomplish one of the tasks on the list by applying the solution to another (for instance, anonymizing a connection and expecting the content to be protected), you gain nothing and most likely make things worse than if you had done nothing at all.

Protecting Content – Encryption

When you don’t want other people to see the content you are sending or receiving, you encrypt it. Encryption comes in many forms, but the most important distinction for us is whether you want to encrypt something permanently or only during the transmission.

Here you have to understand that “transmission” is a technical term and means “exchange of messages between two end points.” An email, for instance, is not a single “transmission,” because it is stored and forwarded by as many intermediate servers as necessary. To the technical user, then, email is something that needs to be treated as in need of permanent encryption.

Now, how do you encrypt your message? You have a lot of options – some better, some worse. In general you want to think of encryption like a lock box into which you put the message. The better the box, the safer the message. In addition, the lock is also really important, as if it is easy to replicate or fake the key, then your safety is gone.

The nerd distinguishes between two types of encryption: symmetric, in which the same key is used to lock and unlock the box, and asymmetric, in which different keys do the same trick. In general, symmetric encryption is easier to handle, but asymmetric encryption is much more powerful.
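To make the lock-box idea concrete, here is a deliberately toy sketch of symmetric encryption – the XOR “cipher” below is trivially breakable and only stands in for real standards like AES, and the key and message are made up:

```python
# Toy illustration of SYMMETRIC encryption: the very same key both
# locks and unlocks the message. XOR is used only to show the idea --
# real software uses vetted standards, never a homemade scheme.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the key, repeating the key as needed.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"swordfish"
ciphertext = xor_cipher(b"meet me at noon", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)  # the same key decrypts

print(plaintext)  # b'meet me at noon'
```

Applying the same operation with the same key twice restores the original – that round trip is exactly what “symmetric” means.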

Think of the standard you use as the lock box itself, and the particular password as the key to the box. Sometimes the password is called a passphrase or key, or more in general the secret. It really has the same function as the key in the box: it makes sure that only the person that owns this particular box can open it.

Typically, you do not have to consciously choose a standard. The software you use to encrypt data will pick an appropriate one for you, and if you keep it up to date it will also change standards as security improves.

Asymmetric Encryption – Private and Public Keys

With symmetric encryption, both end points use the same key to encrypt and decrypt data. For instance, you would put a password on a ZIP file, and the recipient uses the same password. In asymmetric encryption, though, the encryption occurs with one password, the decryption with another. How is this possible? Well, the two passwords are not chosen randomly. Instead, they are mathematically linked: the one used to encrypt has exactly one counterpart that can decrypt, and such passwords are always generated in pairs.

Just like in a Sudoku puzzle, where you have plenty of leeway in placing the numbers but ultimately face strict constraints, in asymmetric encryption you cannot choose just any random pair of passwords. You actually cannot even choose one of the two. Instead, the two passwords are generated for you, and they are made up of gobbledygook that only security software is really happy with.

The two passwords, or keys, have slightly different functions. One of the two can be used to generate the second, but the second cannot be used to generate the first. Because of this, the first one is more valuable and must be kept safe at all times. The other one, on the other hand, poses no danger on its own and can be handled in a much less strict way. That’s how they got their respective names: the first one is called the private key, the other one the public key.

Private keys are so important that you usually encrypt them with symmetric encryption to ensure nobody can use them even if they get to them – at least not use them quickly. So, when you generate a key pair, the software will ask you to assign a passphrase to the private key (not to the public key).

While the asymmetry is fundamental, there is one thing in which the key pair is symmetric: what is encrypted with either key can only be decrypted with the matching other. Since only you have the private key, that makes for all sorts of interesting applications.
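A hedged sketch of how that symmetry works in practice, using the classic textbook RSA example with tiny primes – real keys are enormously larger, and the numbers here are purely illustrative:

```python
# Toy RSA key pair from the textbook example (p=61, q=53), to show the
# key symmetry: what one key encrypts, only the matching other decrypts.
# Real keys are hundreds of digits long and generated by software.

p, q = 61, 53
n = p * q                      # 3233, part of both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753, private exponent (Python 3.8+)

message = 65                   # a "message" here is just a number < n

# Encrypt with the public key, decrypt with the private key:
c = pow(message, e, n)         # 2790
assert pow(c, d, n) == message

# ...and the other way around, which is the basis of signing:
s = pow(message, d, n)
assert pow(s, e, n) == message
```

Either key undoes what the other did – that single property powers both of the applications below.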

1. Sending a Message Only You Can Read

You are the only one with the private key. Anyone that encrypts a message with your public key ensures that nobody can read it but you. Not even they can read it once it’s encrypted!

2. Sending a Message That’s Certainly Yours

When you send a message encrypted with your private key, anyone with the public key can decrypt it. But since they have to use your public key, they know the message came from you, since only you can encrypt with the private key.

Signing

As we just saw, when you send a message that is encrypted with your private key, you essentially state the message came from you. What about if you want anyone to read the message, but want to ensure they know it’s really from you?

Decades of unreliable connections have left us with the concept of a checksum. That’s a number that is computed from the content of a message/file and is a summary or digest of the message itself. The message can be any length, but the digest is typically only a few bytes – its only function is to tell you whether the message was received accurately.

To give you a rough idea of how that works, imagine that you take the value of each letter of the alphabet and add them all up. You tack on the sum at the end of a message, and that’s your digest. A = 1, B = 2, etc. Then the digest of this paragraph is 7997.
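The letter-sum digest from the paragraph above can be written in a few lines – a toy, of course, since real digests use algorithms like SHA-256:

```python
# The "add up the letters" toy digest: A=1, B=2, ... Z=26, everything
# that isn't a letter is ignored. Real digests use cryptographic hash
# functions; this one exists only to make the concept tangible.

def letter_digest(text: str) -> int:
    return sum(ord(ch) - ord('a') + 1
               for ch in text.lower() if ch.isalpha())

print(letter_digest("abc"))  # 1 + 2 + 3 = 6
```

If even one letter of the message changes, the sum (usually) changes too, which is how the recipient notices corruption.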

Now, imagine you create a digest of your message, but you encrypt it with your private key before tacking it onto the message. Suddenly, only you can have created the message, and anyone with your public key will be able to decrypt the digest. This way, they will know that the message is yours and that it was received as sent. That’s quite brilliant, because it allows you to send something that you don’t mean to be hidden from view, while making sure people that care have a good way of verifying it’s really from you.
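Putting the two toys together gives a sketch of a signature: the letter-sum digest described in the text, “encrypted” with the private half of a tiny textbook RSA pair (n=3233, e=17, d=2753 – illustrative numbers only, nothing like a real scheme):

```python
# Toy signature: digest the message, then "encrypt" the digest with the
# private key. Anyone holding the public key can check it.

n, e, d = 3233, 17, 2753        # toy modulus, public exponent, private exponent

def digest(text: str) -> int:
    # A = 1, B = 2, ... as in the text; reduced mod n so toy RSA applies.
    return sum(ord(ch) - ord('a') + 1
               for ch in text.lower() if ch.isalpha()) % n

def sign(text: str) -> int:
    return pow(digest(text), d, n)     # only the private key holder can do this

def verify(text: str, signature: int) -> bool:
    return pow(signature, e, n) == digest(text)   # anyone with the public key can

msg = "pay alice ten dollars"
sig = sign(msg)
print(verify(msg, sig))                        # True
print(verify("pay mallory ten dollars", sig))  # False
```

The message itself travels in the clear; only the little signed digest proves who wrote it and that it arrived unaltered.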

Certificates and Chain of Trust

Now, imagine you wanted to send something to someone who doesn’t have your public key. You want it to be either encrypted or signed; in any case, you want them to know it’s from you and only from you. Well, to do that, you would have to send them your public key, no? Easy!

But wait: if you send them your public key, how do they know it’s your public key? Imagine a rogue government that listens to message exchanges and inserts its own public key whenever a different one is detected. Once it does that, it controls all encrypted traffic. And it doesn’t even take a government – imagine the WiFi network at your coffee shop!

Fortunately, there is a solution to that, in the form of certificates. The idea here is that there is someone you trust in the world, because they are known to be trustworthy and because they have a process in place that ensures their word is worth your trust. You get their public key and whenever they send you a message encrypted with the corresponding key, you know it’s them.

Imagine now they sent you a message saying something like, “Yes, I know XYZ, and if they tell you abc is their public key, then that’s right.” Why, then you could trust the public key “abc”! Note that you have to trust a lot here: you have to trust the verifier both to keep their list of good public keys safe, and not to snoop on the messages you send using “abc”. After all, they told you “abc” is good!

In practice, that’s done all the time without your knowing it. Browsers connect to secure sites (the ones whose URL starts with https:// instead of http://) by doing just that: the site shows a certificate that has some basic information, including someone trustworthy that can verify the information on the certificate. That someone is trusted by someone else, who is trusted by someone else again, who is trusted by someone you trust. In the end, you trust that first certificate because of all the other people’s/sites’ trust.
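As a rough sketch of that chain walk, here is a toy verification loop. The “certificates”, the tiny textbook RSA keys, and the use of Python’s built-in hash as a stand-in digest are all illustrative assumptions – real X.509 certificates carry far more data:

```python
# Toy chain of trust: the root signs the intermediate's public key, the
# intermediate signs the site's public key. We trust only the root
# directly; every other key is verified link by link.

def make_keys(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse, Python 3.8+
    return (n, e), (n, d)               # (public key, private key)

def sign(value, priv):
    n, d = priv
    return pow(value % n, d, n)

def check(value, sig, pub):
    n, e = pub
    return pow(sig, e, n) == value % n

root_pub, root_priv = make_keys(61, 53)
mid_pub, mid_priv = make_keys(47, 59)
site_pub, site_priv = make_keys(83, 101)   # site_priv unused in this sketch

# A "certificate" here is just (public_key, signature_by_issuer).
chain = [
    (site_pub, sign(hash(site_pub), mid_priv)),  # site, vouched for by mid
    (mid_pub, sign(hash(mid_pub), root_priv)),   # mid, vouched for by root
]

def verify_chain(chain, trusted_root):
    issuer = trusted_root
    for pub, sig in reversed(chain):             # start at the root end
        if not check(hash(pub), sig, issuer):
            return False
        issuer = pub                             # this key vouches for the next
    return True

print(verify_chain(chain, root_pub))             # True
```

The browser does essentially this walk, except the links are real certificates and the root keys ship preinstalled with the browser or operating system.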

When your browser connects to a secure URL, e.g. https://mail.google.com, it immediately demands a certificate. Once it verifies all the data and the certificate chain, it talks to the server and trusts it. Browsers usually display that in a very non-obvious way, for instance by showing a little lock at the bottom of the screen. If you click on the lock, you’ll probably get the certificate information on screen. When you look at it, you’ll notice that the fields in there all make sense now.

Anonymization

Sometimes it’s just as important to know that two parties talked as it is to know what they said. You probably remember how big a deal it was when rumors surfaced that an Al Qaeda operative had been in secret talks with the Iraqi government during the build-up of the War in Iraq. It didn’t really matter what they had said: it was the fact itself that they had talks that mattered.

Frankly, on the Internet the problem is less one of spies and terrorists and more one of not having certain parties know you are using certain services. Maybe you don’t want your Internet provider to know you are surfing for porn, or maybe you don’t want your government to know that you are reading up on a massacre it perpetrated.

Anonymization provides that level of protection. It takes a message and bounces it around so that neither the end point nor anyone untrusted in the middle knows where it came from.

Today, there are two main forms of anonymization available: trusted proxies and TOR. The former is mostly used to bypass restricted networks, the latter… oh, well, look it up yourselves, I am not going to get into a controversy here.

Basically, in both cases the idea is to connect through an encrypted tunnel. While your requests and the responses may be in the clear, the tunnel through which they flow is encrypted.

Let’s consider the easier case of a trusted proxy, since it is conceptually the same as in the other case. Assume you are in a country that doesn’t allow you, I don’t know, to search on Google. It does, though, allow you to connect to secure sites. Now, since the connection to secure sites cannot be monitored, what if there was a secure site that just goes out and searches for you on Google?

Well, that’s what a secure proxy does: you connect to it, and it connects for you to the site you really want to go to. When the response comes back, the proxy sends it back as its very own response, but through the secure channel that only you and it can read.

The downside to this is that the proxy will see everything you are sending and receiving, including passwords and content – whatever you didn’t want others to see. That’s why you need to establish that the proxy is trusted: if it betrays you, it will know everything there is to know about you.
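One way to picture the “bouncing around” is onion-style layering, sketched here with a toy XOR stand-in for real encryption; the relay names and keys are invented for illustration:

```python
# Toy sketch of onion-style anonymization: the sender wraps the message
# in one layer of "encryption" per relay, and each relay peels off only
# its own layer -- so no single relay sees both who is talking and what
# is being said. XOR stands in for real per-relay encryption.

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"alpha", b"bravo", b"charlie"]   # one key per relay

# The sender wraps the layers, innermost (last relay) first:
packet = b"GET /search?q=forbidden"
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay, in order, removes exactly one layer:
for key in relay_keys:
    packet = xor_layer(packet, key)

print(packet)  # b'GET /search?q=forbidden'
```

Only the last relay sees the plaintext request, and only the first relay knows who sent it – splitting that knowledge is the whole point.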

Lastly: Login, Authentication, Authorization

One of the questions that comes up regularly is, “Why is it that the server has to prove who it is with a certificate, and the user doesn’t?”


Indeed, the web would be a much better place if client certificates were required. Unfortunately, when the whole security thing came up, trusted certificates were expensive and really hard to come by, so there was no way to ensure everybody used one. Still more damningly, it is hard to set up a server that authenticates using certificates, so even if you had one, it would be of no use.

Instead, the web moved to a model in which you first prove who you are by providing credentials. You essentially come to a web server as an unknown entity, but you provide a set of required data items and the web server accepts you. Usually, that’s a user name and password combination, but some sites request more (like a rotating security question).

The point of this login phase is to establish that you are who you say you are and to provide you with “something” that tells the servers on the next try that it’s you. This is to avoid having to send user name and password back and forth all the time.

When you connect to the server again, for instance because you clicked on a form, you will present this “something” (usually a combination of encrypted cookie and encrypted form field) and the server makes sure that those credentials are valid. Notice that we replaced something you provided (user name and password) with something the server provided. Since the server owns the new credentials, it can do with them as it pleases, including declaring them invalid at any point.
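A minimal sketch of such a server-issued “something”, assuming an HMAC-signed token – the secret, user names, and token format are invented for illustration, and real session tokens also carry expiry times and other data:

```python
# The server signs a token with its own secret at login time, so on
# later requests it can verify the token without storing anything and
# without seeing the password again.
import hashlib
import hmac

SERVER_SECRET = b"keep-this-on-the-server-only"   # made-up secret

def issue_token(username: str) -> str:
    sig = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token: str) -> bool:
    username, _, sig = token.partition(":")
    expected = hmac.new(SERVER_SECRET, username.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)      # constant-time comparison

token = issue_token("alice")
print(verify_token(token))                              # True
print(verify_token("mallory:" + token.split(":")[1]))   # False: forged name
```

Because the server owns the secret, it can invalidate all tokens at once simply by rotating it – exactly the “do with them as it pleases” property described above.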

Every web service must include this verification step on any request that touches user data. Failure to do so causes the worst security issues, which gives this step its fundamental importance in web security. It’s called authentication.

Once you are authenticated, the server still has to decide whether you can do something. Some users may be allowed to do things that others are not, for instance perform administrative tasks. Sometimes that’s handled by creating separate applications for different tasks, with separate user accounts on each service. But more and more frequently, applications are merged for ease of development, and access control lists are used. This final step is called authorization.
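A minimal sketch of that final check, assuming a hard-coded access control list – the roles, users, and permissions are made up:

```python
# Authorization via an access control list: authentication established
# WHO the user is; the ACL now decides WHAT they may do.

ACL = {
    "admin":  {"read", "write", "delete", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_authorized(user: str, action: str) -> bool:
    role = USER_ROLES.get(user)            # unknown users get no role...
    return action in ACL.get(role, set())  # ...and therefore no permissions

print(is_authorized("alice", "delete"))  # True
print(is_authorized("bob", "write"))     # False
```

Note how the check runs after, and independently of, authentication: a perfectly authenticated viewer still gets a “no” on anything beyond reading.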

Summary

At this point, you should understand the fundamental differences between the terms used in security and should be able to make informed choices among a variety of options. Please comment with omissions and requests for clarification.

Read more: http://palladio-project.blogspot.com/2010/06/security-matters.html