ESRI


You can now download the free OSTN02 transformation tools for ArcPad 7 and ArcGIS 9.1. To download them you need to log in at http://www.myesriuk.com/esriuk/members/downloads.asp. They're listed as OSTN02.

Obviously you need to be registered with myesriuk.com to do this. The download adds a new transformation to ArcMap that you can use with OSGB36-based data. It also includes an installation to support OSTN02 in ArcPad 7.

We tend to take storage for granted these days, as you can pick up a half-terabyte disk for about £100, which, considering the first computer I owned was a ZX81 with no storage other than a cassette drive and an 8K ROM, is pretty amazing. However, there are still plenty of ways to use storage up; one that works quite well is to format your drive with the wrong block size and then store an ArcGIS Server map cache on it.

My laptop is currently running Windows Vista, but for ArcGIS Server work I tend to use a Windows 2003 virtual server. I was experimenting with performance, trying different image cache tile sizes and swapping between fused and layer caches. One interesting thing this highlighted was that the disk block size can have a huge effect on the amount of storage taken up by the map cache.

The virtual server I was using had a default virtual hard disk with a block size of 4K. This is fairly standard, although you can vary it when formatting the disk; for example, my media server at home uses 64K blocks, as this gives better performance when accessing large files.

The default tile size in ArcGIS Server when creating a map cache is 512×512 pixels; for the vector data and rendering I was using, these came out at about 5-10 KB each. To compare performance I created the same cache with 128×128-pixel tiles. I expected the total size of the cache to be slightly bigger: there would be 16 times as many images, but each would be 16 times smaller in pixel area and perhaps 10-15 times smaller in file size. There is some overhead in image headers, and the PNG compression might be slightly less efficient, but for the same data at the same cache levels, the information content and total file size should be much the same regardless of tile size.

In reality, what I found was that the 128-pixel tiles took up almost 10 times the total storage space of the 512-pixel tiles. The reason was that the disk block size was too large. The average 128px tile was about 400 bytes, but with a block size of 4096 bytes each file occupies at least one whole block on disk, which means most of the space in each block is wasted. The data I was using was an extreme case: it was very sparse, so a lot of the small tiles were completely empty but still occupied 4K on disk. This is compounded if you build a separate cache for each layer with small tile sizes, as each tile is even less likely to contain any data but still occupies a whole block. If, on the other hand, your data contains a lot of detailed imagery, smaller tile dimensions are not as inefficient, as most tiles will contain information regardless of how small they are. They will still take up plenty of space, but at least you will be using it efficiently. The key is getting the block size just large enough to contain the small tiles without wasting too much space.

So the moral of this story is that to make the best use of the storage you have, you should format the disk with a block size suited to the data you will store on it. If you expect a lot of small, possibly blank or near-blank cache tiles, use a small block size such as 512 bytes. If your data is dense and your tiles larger, you can get away with a larger block size. Normally you don't really have to worry about the inefficiencies of disk block size, as the numbers are so small, but if you are building industrial-scale map caches, the number of tile images can run to hundreds of millions. Use the right block size and the cache can provide huge performance and usability gains; use the wrong block size and those gains come at the cost of a lot of wasted storage.

Well, I haven't blogged anything for ages, as I've been a little busy finishing up my current project and pretty much on-site for most of July, but hopefully things will calm down a bit now. I've just been trawling through the UC Q&A pages seeing what all the lucky people who are off to the user conference will get to see (not that I'm jealous or anything!). Everyone seems to be focusing on the new licensing/tiering of ArcGIS Server. I think the change is a good thing, though it will take a while to get used to thinking about one product, "ArcGIS Server", with multiple levels, rather than all the different individual products (ArcSDE, ArcGIS, ArcIMS, etc.) as we do at the moment. Some people have complained that it's now even more complex, but I think once you get used to it it's actually simpler, and it's going to be a lot easier and more consistent when trying to explain it to potential customers.

Anyway, to the point: I discovered this little gem in the Q&A section that seems to have been overlooked in all the Server discussion. It seems that ESRI will be supporting an open-source DBMS in the form of PostgreSQL; this support is currently in testing and due for release sometime after the initial 9.2 release. You can read about it in the Q&A under the Geodatabase and ArcSDE section (Q11).

I'd heard rumours of this, and noticed a few bugs logged in the 9.2 system that mentioned Postgres, which pointed to something happening, but it's nice to see something public about it. It will be interesting to see whether people actually choose this platform to run an enterprise spatial database with ESRI technology, or whether organisations that adopt an open-source database will be less inclined to use it with commercial software.



I see that Rob Elkins gave a good link to the Data Interoperability Extension on his blog, which reminded me that I keep meaning to write something about this.

[Image: fme_workspace_20050331_bg.gif]

The Data Interoperability Extension, or DIE as we are not allowed to call it, is by far and away my favourite ArcGIS extension. I have used it extensively in the last two large projects I have worked on. Not only does it have great support for a wide range of data formats, but it can also go a lot further than just reading data natively. You get full support for defining custom transformations, which allow you to convert data not just between formats, but through a whole bunch of "transformers" on the way. The workbench application gives you drag-and-drop support for these transformers, allowing you to do things like field manipulation, string transformations and geometry calculations. Once you have built these custom transformations, you can embed them in geoprocessing models so that they are really well integrated into ArcGIS. If I were only able to use a single ArcGIS extension, the DIE would be the one.

The only confusing thing I find about FME is the number of different license levels that you can have. If you get the DIE, then this is an extension that you buy from ESRI; however, if you need additional formats or functionality, you can extend the extension by purchasing additional licenses from Safe Software. This page outlines some of the options you have with DIE and FME extensions. Basically you have three options: you can use the standard Data Interoperability Extension, on top of which you can add the snappily named FME For ArcGIS Format Pack for ArcGIS Data Interoperability, which builds on the ArcGIS extension with additional format support. Finally, you can go the whole hog and use the full-grown FME for ESRI, which gives similar integration with ArcGIS, but also the standalone workbench application. When you combine this with the support for geoprocessing, especially in 9.2 with Server and Engine support, there is almost no type of data that you cannot support with ArcGIS.


[Image: explorer_logo.gif]

Since James posted all his screenshots of ArcGIS Explorer Beta 2, it seems like everyone has been getting excited about what's going to be in ArcGIS Explorer. At the moment it's still in private beta, so there have been more questions than answers. However, there is a reasonable amount of information posted on the ESRI website about ArcGIS Explorer. You can find it, funnily enough, in the ArcGIS Explorer section. There is an FAQ which should answer a lot of people's questions; things could still change before release, but it gives a good idea of some of the cool stuff you could do with this. There are also a couple of videos and a bunch of screenshots in addition to those shown below.

Reading the FAQ, there are lots of interesting things about ArcGIS Explorer, such as support for local data and WMS services, but the thing that I think is most exciting is the extensibility that is available. This applies to both the data and the application.

One of the cool things is being able to publish your own globe and 2D services, either commercially or within an organisation. So if the standard imagery is not good enough for you, you could always subscribe to a commercial globe service offering more detailed data, or use a commercial MasterMap service and stream this into the 3D viewer. If as an organisation you are interested in global data, but not necessarily the land-based imagery published by default (perhaps you have climate or meteorological data, nautical charts or environmental data), you can publish your own 3D globe data across the organisation without relying on the standard published data.

The second area of extensibility is the ability to publish custom tasks, and to use the SDK to create custom interfaces to those tasks inside ArcGIS Explorer. The ability to make advanced server-based geoprocessing available to end users is a really powerful way to use ArcGIS Explorer not just for visualisation and pushpin-type apps, but to let users do real analysis while still using a simple interface. Think of custom network tracing for utility networks, routing over non-standard data such as cycle paths or footpaths, environmental modelling, emergency management and modelling scenarios: all of these as server-based tasks initiated by users entering simple parameters through a task-based user interface. I've had a quick play with the .Net SDK and there are all sorts of possibilities opened up by this, especially when combined with the support for geoprocessing and models within ArcGIS Server.

It should be interesting to see what people build with these tools and whether this helps to advance the wider usage of geographic data and services beyond the simple mapping and visualisation that Google Earth has done such a great job of promoting.


Graham Lee and Bill Gidley are riding from Land's End to John O'Groats, hoping to raise money for The Macular Disease Society and The Freeplay Foundation. To help their families and sponsors keep track of them, our Technical Solutions Group (pre-sales to you and me) has equipped them with an array of mobile technology to track their progress.

They are running a Windows Mobile device and a Bluetooth GPS with a custom application that uploads their position every few seconds via GPRS to one of our servers. The data is pushed into ArcSDE and then served out via ArcIMS. You can follow their progress and sponsor them here.
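The client side of this is conceptually simple: read a GPS fix, POST it over HTTP, sleep, repeat. Below is a minimal plain-Python sketch of that loop; the real application runs on Windows Mobile and is not this code, and the endpoint URL, field names and rider identifier are all hypothetical, invented purely to show the shape of the idea.

```python
import time
import urllib.parse
import urllib.request

# Hypothetical tracking endpoint; the real server URL is not public.
TRACK_URL = "http://example.com/track/update"

def position_request(rider, lat, lon):
    """Build an HTTP POST carrying one GPS fix.
    Field names ('rider', 'lat', 'lon', 'timestamp') are made up."""
    payload = urllib.parse.urlencode({
        "rider": rider,
        "lat": f"{lat:.6f}",
        "lon": f"{lon:.6f}",
        "timestamp": int(time.time()),
    }).encode("ascii")
    return urllib.request.Request(TRACK_URL, data=payload)

# The upload loop, every few seconds as the post describes
# (read_gps_fix is a placeholder for the Bluetooth GPS read):
#
# while True:
#     lat, lon = read_gps_fix()
#     urllib.request.urlopen(position_request("graham", lat, lon))
#     time.sleep(5)
```

On the server side, each received fix would be inserted as a point feature into the ArcSDE layer that ArcIMS then serves out as the live track.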

The online map is a bit Where 0.2, but this is what their current progress looks like when tracked in a 3D viewer:

[Image: Track.jpg]

Raster map data overlaid on the terrain, with their current track in the bottom-left corner.


Today was very quiet in the office, with most of the team on-site installing, testing and preparing for training. The rest of the office seemed pretty quiet too, and I realised that it is exactly six months to the day since we moved into our new grown-up office at Millennium House.

[Image: millenhouse.jpg]

The office move was a fairly big deal for those of us based in Aylesbury, moving from the confusing/charming warren of Prebendal House, and the various offices in Prebendal Court, to a large open-plan modern office.

At Prebendal House no single room had more than about 10 people in it, so you tended to have fairly close-knit teams, and each office had its own distinct character: quite fun but maybe unproductive in some offices, very quiet and studious in others. Now, however, Consultancy is all based on a single floor with room for about 60 people, and I think in general it is much quieter, as people don't want to disturb others; but at the same time you get to see a lot more people more often, and communication between the different groups is much better. It takes a while to settle into a new environment. I'm not sure that everyone prefers it, and it is maybe a little less fun than the previous office, but I think on the whole it is an improvement, with much better facilities and much better integration between groups.
