June 2007


I happened to be working in Canary Wharf on Tuesday, so on the way home I made a last-minute decision to detour to the mashup* event, which was themed ‘On Location’.  I’d made a mental note of it a while ago but then completely forgotten; anyway, I’m glad I went as it was pretty interesting.  There was a good talk from TeleAtlas about what they are up to, particularly in the Internet mapping and wireless mapping space.  I was most interested in the information they presented about the commercial penetration of the different map sites around the world.  Google was the top one, but the interesting bit was that in most places Google was first and a local provider (such as Multimap in the UK) was second.  I’m not sure why this would be; maybe it’s a language thing, or historical, or perhaps it’s a map presentation and cartography issue.  I know my wife always uses Multimap when looking for addresses, as she says they look like proper maps!  They also mentioned that Nokia is planning to launch a new global (tile-based) mapping platform and API for use with their devices.

The talk was followed by a panel discussion, which focused mostly on the privacy and security issues surrounding things like Google Street View, and also locational information kept, for example, by your phone service provider.  Do you care that your phone company, your employer, or your spouse can locate you at any time?  Or do the benefits of you and trusted entities knowing your location outweigh the issues?

One thing the talks, and the interesting applications that many of the presenters and demonstrators were discussing, made me think about is the extent to which the ubiquity of high-quality reference data in web mapping sites and APIs leads to a downward spiral in the perceived value of that data.  I guess this is a question that TeleAtlas and other data providers probably worry about too.

Most of the demonstrators were showing property-search websites, which seems to be where the money/talk is at the moment, but being a geek at heart I was more interested in Widr and their location API.  It’s a similar concept to Navizon: location by wireless networks.  It’s pretty cool, and the model of revenue sharing with the people who contribute the location data is nice.  I guess the key is how much coverage they get, which will partly depend on how much data is collected by contributors.  The problem I found originally with Navizon, when it was just a mobile application, is that to collect data you need a GPS, but if you have a GPS you (mostly) don’t need Navizon.  Having a website API opens it up a bit, as your desk- or laptop-bound website users may not have a GPS, though how important location is to laptop users vs mobile users I’m not so sure.  It’s still a cool idea though.
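For the curious, the basic idea behind this kind of wireless-network positioning can be sketched in a few lines.  This is a toy illustration, not Widr’s or Navizon’s actual API or algorithm: the database of access point locations and the weighted-centroid approach are my own assumptions for the example.

```python
# Toy Wi-Fi positioning sketch (NOT Widr's or Navizon's real API):
# estimate a position as the signal-strength-weighted centroid of
# known access point locations from a crowd-sourced database.

# Hypothetical database: BSSID -> (lat, lon) collected by contributors
AP_DB = {
    "00:11:22:33:44:55": (51.5033, -0.0195),
    "66:77:88:99:aa:bb": (51.5041, -0.0188),
    "cc:dd:ee:ff:00:11": (51.5028, -0.0201),
}

def locate(scan):
    """scan: list of (bssid, rssi_dbm) pairs from a Wi-Fi scan."""
    total_w = lat = lon = 0.0
    for bssid, rssi in scan:
        if bssid not in AP_DB:
            continue  # access point not in the database, skip it
        w = 10 ** (rssi / 10.0)  # dBm -> linear power, used as weight
        ap_lat, ap_lon = AP_DB[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0:
        return None  # no coverage here - the chicken-and-egg problem
    return (lat / total_w, lon / total_w)

print(locate([("00:11:22:33:44:55", -40), ("66:77:88:99:aa:bb", -70)]))
```

The `None` return is exactly the coverage problem mentioned above: with no contributed data for an area, the lookup simply has nothing to go on.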

***Update*** 

I should probably point out at this point that the tool currently linked from here does not work with the current XML format of OpenStreetMap.  I will post a new script soon that does, although you can probably tell from how long ago the format changed that I have had a few other real-life things get in the way of blogging recently.

***End Update***

A while back I started getting interested in the OpenStreetMap project, initially just filling in some detail around the area where I happen to live.  Access to free, detailed data has always been a bit of an issue in the UK, and this looks like a great project that seems to be getting a lot of traction.  One thing to note is that the project has been approached very differently from a typical GIS data project.

Ease of data capture has been the main goal, and rightly so, as capturing the data in the first place is obviously the biggest challenge.  However, this does mean that if you approach the data expecting something similar to, say, Navteq or O.S. data, you are in for a bit of a shock.  There is a documented XML structure, but it lets you put pretty much whatever content you want into the data.  This leads to a wonderfully rich set of attribution, but it also poses some challenges to using the data from a software point of view.  There are some guidelines on attribute feature coding here, and there have been lots of heated debates on the discussion list about whether a new data model should be adopted, whether a topological model is better, or whether a spatial index is needed on ways and segments.  If you are used to commercial data, the main thing you will notice is that there is very little consistent metadata at the feature level.  That said, I’m sure the data has plenty of uses above and beyond cartographic display.

Most of the current tools are aimed at data capture tasks, or at processing the data into nicely rendered map tiles for use in a Google-style slippy map.  There is now a pretty large set of data to work with, and it would be great for people to start putting it to other interesting uses.  To help with that process I’ve just released a copy of a Python script I wrote last year (as an exercise in learning Python) for loading the data into a file geodatabase for use in ArcGIS.  I was planning on rewriting it, but never seem to have the time or enthusiasm, so I’ve released it as is.

The OpenStreetMap data can be downloaded as a planet dump file containing the current set of data.  The data consists of nodes (both points and vertices), segments (a linear segment between two vertices), and ways (a linear or possibly area feature made up of an ordered list of segments, plus attributes).  The loader constructs the ways from the nodes and segments and loads them into polyline or polygon layers.  It also loads those nodes that have attributes as a point dataset.  Attributes in OpenStreetMap are based on key-value pairs such as k=”highway” v=”motorway”; you can think of k as the attribute name and v as the attribute value.  The loader creates attribute fields for all the “standard” keys and loads them into a separate table that can be joined easily to the nodes and ways based on id.  The non-standard key-value pairs are loaded into a separate table with node_id, name, and value columns.  You can use a relate in ArcGIS to join these to the ways or nodes.
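The node/segment/way assembly and the standard-versus-other tag split can be sketched roughly as below.  This is an illustration of the old (pre-0.5, segment-based) data model described above, not the released loader script itself; the sample ids, coordinates, and the `STANDARD_KEYS` set are made up for the example.

```python
# Sketch of the old OpenStreetMap data model: a way is an ordered list
# of segments, and each segment joins two nodes. Illustrative only.

nodes = {                    # node id -> (lon, lat)
    1: (-0.1, 51.50),
    2: (-0.1, 51.51),
    3: (-0.1, 51.52),
}
segments = {                 # segment id -> (from node, to node)
    10: (1, 2),
    11: (2, 3),
}
ways = {                     # way id -> (ordered segment ids, k/v tags)
    100: ([10, 11], {"highway": "motorway",
                     "name": "A-road",
                     "created_by": "JOSM"}),
}

# Keys that become real attribute fields; everything else goes to the
# separate key/value table, joinable back by id.
STANDARD_KEYS = {"highway", "name"}

def build_way(way_id):
    """Chain a way's segments into one coordinate list and split tags."""
    seg_ids, tags = ways[way_id]
    coords = []
    for seg_id in seg_ids:
        frm, to = segments[seg_id]
        if not coords:
            coords.append(nodes[frm])
        coords.append(nodes[to])
    # A way whose geometry closes on itself could be loaded as a polygon.
    is_polygon = len(coords) > 3 and coords[0] == coords[-1]
    std = {k: v for k, v in tags.items() if k in STANDARD_KEYS}
    extra = {k: v for k, v in tags.items() if k not in STANDARD_KEYS}
    return coords, is_polygon, std, extra

coords, is_polygon, std, extra = build_way(100)
print(coords)   # [(-0.1, 51.5), (-0.1, 51.51), (-0.1, 51.52)]
print(extra)    # {'created_by': 'JOSM'}
```

Here `created_by` is not in the standard list, so it would land in the extra key/value table rather than as a field of its own.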

The tool runs as a geoprocessing tool from the toolbox, and was written so as not to use any more memory as the data volumes increase, so it should carry on working even when all the TIGER data is imported.  The OpenStreetMap data is usually downloaded as a compressed XML file.  This tool does not use an XML parser, as in the past the data has not always had the correct UTF-8 encoding; it also means you don’t need to decompress the data beforehand, which should save a bit of space.  If you are interested in different tags, you can just edit the script to use a different list of standard or ignored tags.
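The streaming approach just described can be sketched like this.  It is not the released script, just a minimal illustration of the same idea: read the gzipped file line by line (so memory stays flat), pull k/v pairs out with a regex instead of an XML parser, and tolerate dodgy UTF-8.  The element layout follows the old OSM format and is an assumption here.

```python
import gzip
import io
import re

# Regex that pulls k/v tag pairs straight out of the raw text, no XML
# parser involved, so malformed encoding doesn't abort the whole load.
TAG_RE = re.compile(r'<tag k="([^"]*)" v="([^"]*)"')

def iter_tags(path_or_file):
    """Yield (key, value) pairs from a gzipped OSM-style XML file,
    reading line by line so memory use stays constant."""
    with gzip.open(path_or_file, "rt", encoding="utf-8",
                   errors="replace") as f:  # tolerate bad utf-8 bytes
        for line in f:
            for match in TAG_RE.finditer(line):
                yield match.group(1), match.group(2)

# Demonstrate with a tiny in-memory "planet file".
sample = b'<node id="1"><tag k="highway" v="motorway"/></node>\n'
buf = io.BytesIO(gzip.compress(sample))
print(list(iter_tags(buf)))   # [('highway', 'motorway')]
```

Swapping which tags are kept is then just a matter of filtering the yielded keys against your own standard/ignored lists, much as the script allows.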

If you’re interested in exploring the data, give the loader a go and see what you can do with OpenStreetMap.  You can download the loader from ArcScripts.

I was lucky enough recently to see a great demo of the giant TouchTable, and I had forgotten, since seeing it three years ago at the user conference, what a very, very cool piece of kit it is.  With Microsoft recently announcing their forthcoming Surface platform, I got to wondering what would be involved if I wanted to build one myself.  The thing that makes these devices so compelling to use is the multitouch and collaborative aspect of the interfaces.  Pretty much all the devices available to consumers, such as tablet PCs, or to businesses, such as smart boards and kiosk PCs, are single-touch only.  The devices just emulate normal mouse interaction with a single cursor.  The TouchTable, Surface PC, and Perceptive Pixel devices all allow you to interact with multiple touches at the same time, either with both hands or with several people at once.  This bit is the most challenging, both from a hardware sensing and a software processing perspective.  I’m not too sure how the TouchTable and Perceptive Pixel stuff works, but the Surface PC works using IR sensing, as described here.

Anyway, after a bit of searching I found this site dedicated to building your own touch table using the same principles.  There is a DIY guide to putting together the hardware, as well as links to an open-source effort to build the software to interact with it.

All I need to do now is think of a good reason for making one!