aka ‘Controlled propagation of data vs ad-hoc chaos’
How do you distribute GIS data across the enterprise?
Centralisation is a great idea. It means that you have all data managed in one place – and supposedly enables that oft-used phrase, ‘master data management’.
In a way, that’s the danger behind data mastering – the perception that because it’s in one place, it must be mastered, and that everyone should simply use it. Whilst that’s partly true, it’s often just not practical. You need people to be able to access that data, otherwise you’re not unlocking its real value.
For that reason it’s important that methods exist to allow users to discover that information – and then, once it’s discovered, to access it.
Metadata is your friend. At least, it should be. It should be possible to discover data, irrespective of its location, and then consume, use, analyse, or do whatever you want with it – add value and make use of it – just as long as you’re not actually changing it without the *custodian* of the data knowing. (That is, the person that owns the data should know who has the authority to change it.)
Once you’ve found the data, you should be able to access it, irrespective of where you are and, ideally, irrespective of the device you’re using it on. Now many organisations aren’t ready to enable this; many companies don’t yet embrace BYOD, and for good reasons in many cases. However, let’s say you’ve got users out in the sticks, who have a poor network connection and poor general Intranet/Internet access.
Simply put, you send the data to them.
But hang on, that’s breaking the concept of data mastering, isn’t it? Aren’t you forcing the propagation of data to happen – something you’ve tried hard to combat by centralising the data?
Actually, no. You’re not reducing the validity of the data, nor are you breaking data mastering rules. What you’re doing is scheduling a copy of the data to exist in a location that is more proximal to end users, that they can make use of. In this case the value of being able to use the data is greater than the risk of users producing additional copies of the data that they use inappropriately.
So some of this involves relaxing strict centralisation rules, but it’s all for good reason. What you’re not doing is allowing the ad-hoc propagation of data; instead, you’re using automated scheduling technologies to produce a copy of data that people can use just as if they had full access to the central store.
In the GIS world, if you’ve got access to Python scripting, you can do this. If you want to do something more dynamic, or potentially more supportable, you can use FME Server.
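As a minimal sketch of what that scheduled copy might look like – assuming an Esri setup with arcpy available, and with the paths, geodatabase and dataset names below entirely hypothetical – something like this, run nightly from Windows Task Scheduler or cron, is all it takes:

```python
# Refresh a read-only replica of selected datasets from the central,
# mastered geodatabase to a store closer to the remote users.
# All paths and dataset names here are hypothetical.
import os
import arcpy

CENTRAL_GDB = r"\\central-server\gis\master.gdb"   # the mastered, central store
REPLICA_GDB = r"D:\gis\replica.gdb"                 # the copy near the field users
DATASETS = ["Assets", "Pipes", "Boundaries"]        # whatever those users need

arcpy.env.overwriteOutput = True  # replace yesterday's copy on every run

for name in DATASETS:
    source = os.path.join(CENTRAL_GDB, name)
    target = os.path.join(REPLICA_GDB, name)
    arcpy.Copy_management(source, target)
    print(f"Refreshed {name}")
```

The same pattern can be wired up in FME Server if you want something more dynamic or more supportable – the point is that the copy is produced by a controlled, scheduled process, not by users exporting shapefiles ad hoc.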
Why does the Waterfall model still exist?
For many organisations, it’s legacy. It’s keeping in the safe zone. It’s what’s always been done.
GIS is like any other subject area in that moving to a new Enterprise GIS, updating from one version of GIS to another, or rolling out geodatabases all require requirements gathering before an implementation phase. And financial planning always likes to know the total cost, who is going to get what, and so on.
The trouble is, GIS – a little like Data Warehousing, BI and asset management – touches many areas of the business. We’ve spent the last ten years or so understanding why the Waterfall process doesn’t work so well in IT in general – or, rather, why you can apply standard(ish) project delivery practices to bridge building or car making, but not so well to modern IT. Specifically, it doesn’t match well to GIS, which is a multi-channel delivery discipline.
A typical Enterprise has many front-ends to cater for, because it has many different user types; all these users have different requirements for having data presented to them in a particular way, because it helps them either make an informed decision, or helps them capture, store and manage information. This means that you need to present end users with an idea of how the system will look to them; how it will enable them to do things they do now, but (hopefully) better; and how they may need to alter their current business processes to make better use of the new tech.
Waterfall just doesn’t do this at all well. Moving from a legacy GIS to a new GIS always means big change. Many users have expectations, now, because they’ve used Google Maps, or they’ve been playing with iOS or Android Maps. They don’t understand why they can’t just scroll about a bit, and their enterprise’s data doesn’t just appear in front of them, as if by magic. They don’t want to have to know who owns this bit of information, or who to call to get hold of this particular shapefile. They want to be able to see something up and working, prototyped, and tell The Project how they want this new tech to work for them.
Am I talking Agile? Well yes, I guess I am. Good and appropriate use of Prototypes in GIS is perfectly possible now. Going into a project with a good but sparse set of High Level Requirements, ready for iterations of development of front ends, is the way to go. In parallel, you can use the time to plan how you’re going to engineer data cleansing, conversion, migration, and cutover. And that can be done using the same approach – it’s no different. Making sure everyone is kept informed, and that The Project stays close to The Customer, is what’s important here, rather than how many requirements specifications you’ve written.
Enterprise GIS takes a long time. I’d wager that any organisation would need to spend 18 months on an Enterprise GIS Programme to get it there: and even in this case, you should be looking for the establishment of a framework (data governance, data management and data publishing, with a few delivery channels in place) rather than the total solution that remains static for years. By planning this and setting the expectation right in the first place I reckon you’re far more likely to achieve the best results than by looking back to the 80s and an outdated delivery method.
Apple’s iPhone Maps debacle is literally that. It’s a debacle. Apple have set the expectation level so high now that anything less than equal replacement is seen as bad. And this is far less than an equal replacement.
People depend on iPhone maps. You might not, but others do. I personally depend on it, and I don’t use it as a turn-by-turn navigation tool. I read it as I would a paper map. Yeah, a bit old school, but there are others who do the same. So for me, turn-by-turn navigation is not what I want from Maps.
You’ve got to couple this with the level of functionality and richness of data that Google Maps provided. Google had Search under their wing, which enriched the data on which GM was built. Apple doesn’t have this. It’s a missing link. In this respect, it’s a Data Quality issue.
Nor does it have Streetview. Not useful for me, but will be useful for others – such as people that perform deliveries, or to quickly survey areas. It’s all part and parcel and it’s not just a gimmick.
Google had years to refine Google Maps, and they did it. It was slick, reached the right level of density of information, and was cartographically pleasing. Apple Maps is still new, and in many areas looks awful. In this respect, it’s a Data Visualisation issue.
One thing that Apple Maps does have over and above Google Maps is that it’s vector based. Since CPU in mobile devices is less of an issue than it was five years ago, this should be taken advantage of. It should be possible to render more directed, user-tailored information than the tile-based model allows. And Apple needs to recognise that this is where it should present its USP. In that respect, Apple needs to work towards a superior product.
Apple will never get the richness (or perhaps quality) of information that Google has, but they are able to present something different. 3D mapping is neat, and will entice users who want to see a real, yet more stylised, view of a city.
The solution to this has to be to have GM made available again, then ensure Apple Maps has some reason to be used over and above GM. Only that way will we get to use the iPhone as it should be used – the right tool, and a choice between at least two, to do the job. As I said above, Apple could make better use of its vector mapping to make something more bespoke to the end user. So, I’d turn to Google Maps if I wanted to search what’s close to me, or to use StreetView. I’d use Apple Maps if I wanted to see a more cartographically pleasing map (which it isn’t at the moment, by the way), or if I wanted to see the same base information rendered a different way that I prefer.
This is the only way Apple can feasibly salvage this. Apple, admit that your data will never be as rich as Google’s. Make Google Maps available, then make a better cartographic product so that people will want to use Apple for Mapping first. Set the expectation high and you’ve got to follow up.
As we know, Apple Maps will be implemented when iOS 6 gets rolled out. The best review of Apple’s mapping offering I’ve found so far has been on Peter Batty’s blog.
We also know that Apple want to decouple themselves from their dependencies on Google, which is understandable. What’s interesting to me is what the functionality and look & feel will be like once it’s revealed.
Apple pride themselves on their user experience, and I would say that in most cases it’s pretty clear they’re ahead of Google’s offerings (take Android vs iOS; both are great, but there’s something slicker about iOS). What I’m hoping is that the same is true for Apple Maps… but there may be more to this one than others.
Google Maps, whilst missing out certain items that are deemed necessary (turn-by-turn navigation), offers a pretty good and slick experience. However, for me, the best part of it is the way that maps are rendered. No, they’re not OS maps, but they look decent enough; they have label density about right, they aren’t cluttered, and make use of colour pretty well. They’re also slightly tailored to each nation’s incumbent mapping systems (compare the UK’s with France’s mapping, and then consider OS vs Michelin map styles).
Then there’s the whole vector-based maps thing. It’s amazing how, after being in tile-based land for the past 5 or so years, we’ve gone back to vector mapping. There were good reasons why tile-based maps were a great idea, mostly the ability to cache content and the ability for the server to sort out any rendering before it hit the client. Now we’re going back to vector-based, are we going to see the issues of the past resurfacing – such as odd labelling – that would only be changed once we have an update to the app (or even the operating system)?
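For reference, the tile-based model we’ve all been living in is simple enough to express in a few lines. This is the standard Web Mercator ‘slippy map’ tile-index calculation (the London coordinates are just illustrative): the client asks the server for pre-rendered 256×256 images by x, y and zoom, which is exactly what makes them so cacheable.

```python
import math

def lat_lon_to_tile(lat_deg, lon_deg, zoom):
    """Return the (x, y) index of the Web Mercator tile containing a point."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom                                   # tiles per side at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# e.g. the tile covering central London at zoom 15
print(lat_lon_to_tile(51.5074, -0.1278, 15))
```

With vector maps the client does the drawing (and the labelling) itself – which is precisely where those old rendering quirks could creep back in until the app or OS is updated.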
And then what’s the big deal about 3D – really? Do we actually need to see 3D maps when we’re navigating or looking for the nearest pizza takeaway?
For me, to be a good replacement, it’s not necessarily about the functionality, but about the ‘pleasantness’ that the maps bring out. So far, from the screenshots I’ve seen, I’m not overly impressed with Apple’s efforts, but I’m open to being persuaded.
Getting people to think spatially shouldn’t be difficult. However, it often is.
A number of times I’ve seen Spatial Information explained to people in order that they understand the value of geography: the need to determine how near, or far, something ‘A’ is from something ‘B’. There’s a relationship there between the two things that is influenced by their Geography, and yet it’s not always so obvious unless you either see it visually, or the relationship is enforced inside business rules.
Google Maps (and others, of course) brought enlightenment for the first of these, the visual aspect. It made it simple, and mainstream, for the visual relationship to be obvious. But for me, the one that’s still not explored, exploited or understood is that of business rules. Within a company there is often a geographic relationship between assets – or for that matter, anything that an organisation is interested in. The trouble is, as ever, getting people to understand the importance of that relationship. And for me, it’s initially putting it on a map that counts.
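To make the business-rules idea concrete, here’s a minimal sketch of what encoding one geographic rule might look like, using the shapely library. The rule itself (“every meter must sit within 50 m of a distribution main”), the coordinates and the asset names are all invented for illustration:

```python
# Check a hypothetical rule: every meter must lie within 50 m of a main.
# Geometries are in a projected coordinate system, so distances are metres.
from shapely.geometry import LineString, Point

mains = [
    LineString([(0, 0), (100, 0)]),
    LineString([(100, 0), (100, 100)]),
]

meters = {
    "M001": Point(10, 5),    # 5 m from the first main - fine
    "M002": Point(40, 80),   # ~60 m from the nearest main - breaks the rule
}

MAX_DISTANCE_M = 50.0

for meter_id, location in meters.items():
    nearest = min(main.distance(location) for main in mains)
    if nearest > MAX_DISTANCE_M:
        print(f"{meter_id} is {nearest:.0f} m from the nearest main - flag it")
```

None of this is clever GIS; the point is that once the relationship is written down as a rule, it can be checked automatically rather than relying on someone noticing it on a map.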
As an iterative step, providing visibility of assets via maps first will enable the user base to understand why getting proper spatial correctness is important. Can users see what the problems are? Do we know what the issues really are with data – its inaccuracies, incompleteness and such? Without a map, it’s pretty hard to understand. It’s like the early years of OpenStreetMap: people were initially disappointed until they realised it was getting better through contribution. Reading the reviews of iPhone apps that utilise OSM data, it’s clear they’ve got better over time as the data has become better. The thing is, OSM gets cleaner through contribution and correction over time.
For business data, it’s a different path. Showing the user community, visually, how good or bad the map data is, and then discovering the business rules, has to be a better way forward. It’s easier to spot gaps in a network on a map than it is through business feedback. The process of looking for ‘empty areas’, whether by eyeballing or by analysing for areas of unexpectedly low data density, can reveal much more about data incompleteness than simply looking at tabular information.
Simple checks for data completeness, before performing in-depth analysis later, are a low-cost way of finding gaps early, rather than trawling through the data later when change is expensive.
In other words, look for spatial gaps early when the cost of change is at its lowest.
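As an illustration of that kind of early, low-cost check, here’s a sketch of an ‘empty area’ scan: bin asset coordinates into a coarse grid and flag cells that are suspiciously sparse. The cell size, the threshold and the sample coordinates are all made up – in practice the points would come straight out of the geodatabase:

```python
# Flag grid cells with fewer features than expected - a crude but cheap
# way to spot possible gaps before any in-depth analysis.
from collections import Counter

CELL = 1000.0        # cell size in metres, assuming projected coordinates
MIN_EXPECTED = 5     # fewer features than this in a cell is worth a look

asset_xy = [(523100, 179200), (523400, 179900), (530050, 181200)]  # sample points

counts = Counter((int(x // CELL), int(y // CELL)) for x, y in asset_xy)

# Scan every cell inside the data's bounding box, so genuinely empty
# cells show up as well as sparse ones.
xs = [cx for cx, _ in counts]
ys = [cy for _, cy in counts]
for cx in range(min(xs), max(xs) + 1):
    for cy in range(min(ys), max(ys) + 1):
        n = counts.get((cx, cy), 0)
        if n < MIN_EXPECTED:
            print(f"Cell ({cx}, {cy}): {n} feature(s) - possible gap")
```

It won’t tell you why an area is empty – that’s where the business feedback and the map come in – but it’s a cheap first pass.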
A quick blog about a big subject.
I was asked today what I thought the optimal number of stakeholders on a moderately sized software development would be. My opinion is somewhere around 7. 7 is often considered the ‘magic number’ as it’s roughly the number of items the brain can hold in working memory at once (y’know, the old cognitive psychology bit) – hence why for years it was bad practice to have more than 7 menus on a desktop application; as I’m typing this I’m looking at Firefox with 7 menus on it (File, Edit… you know what they are).
(In fact as I remember, the golden rule was ‘7 plus or minus 2’. Some folk will remember up to 9, others down to 5. But 7 is a nice central number.)
Anyway, I can see that there is some kind of analogy with this when it comes to stakeholders. Not only in meetings, but in the groups you’d meet for requirements gathering as well. Say for instance you’re gathering requirements for a social networking application or new mapping application. Within your room, beyond a certain number of people, you’re going to find people who don’t contribute, or who don’t feel they’re getting heard. Fewer than this number, and you’re not getting a rounded view from all the parties involved. Plus, of course, there are the inter-relationships between stakeholders: with n people in the room there are n(n-1)/2 possible pairwise relationships – 21 at seven people, over a hundred at fifteen – so the dynamic gets rapidly more complex as you increase the number of participants.
What of Agile? Well, as development becomes increasingly Agile – especially so in the current climate where customers are demanding to see progress earlier – it’s more important than ever that there is direct involvement between development team members and customers. After all, they’re all stakeholders together. Making sure that this relationship is fluid is important, so it’s necessary that there aren’t so many people involved that things get stifled and over-analysed. Otherwise, not only are relationships too complex, but the project is too big.
A little earlier today I posted this very question on Twitter. I got a reply back very quickly saying ‘as few as possible’. Whilst this does make sense, isn’t there a risk that you don’t actually speak to enough people to influence the outcome?
So for all this, ‘7’ seems to be another magic number for stakeholders – be they developers, deliverers, purse-string holders, or users, it seems the right sort of combination. If you go too low, you don’t have enough different opinion. If you have too many, it probably means that your project is too big anyway and needs some kind of decomposing.
Chris Norton mentioned the new Facebook Places changes on his blog. FB Places will partly revolutionise marketing through the legendary ‘check-in produces offers’ model that we always imagined Location Based Services would revolve around.
There’s always the notion that check-in is actually an effort. I use Foursquare and Facebook checkins sometimes – although rarely at the same time, because I have different people on each account. (Facebook is normally friends, whilst 4SQ is mostly other folk interested in geo-things that I know.) But I don’t always remember to check-in; it’s a bit of a faff, so it doesn’t always happen – and then I think ‘I should’ve checked in there….’ long after I’ve left the place.
What I’m surprised about is that geofencing has not been mentioned much in relation to FB Places. (Geofencing is where, rather than checking in yourself, you trigger an alert with the host once you cross a virtual fence.) It’s FB’s logical next step to turn on geofencing as part of their native mobile apps. Isn’t there some kind of cost model that would let retailers with a space on FB create a suitable, standard-sized polygon around their outlets, which could ping a user when they’re in proximity?
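The mechanism itself is trivial – the hard parts are scale, battery and opt-in, not the geometry. As a toy illustration only (a simple radius rather than a polygon, with invented outlets and a made-up 150 m fence; this isn’t any Facebook API), the check amounts to:

```python
# Compare the phone's last reported position against each outlet's virtual
# fence. Outlets, coordinates and the 150 m radius are all invented.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

FENCE_RADIUS_M = 150.0
outlets = {
    "yyy coffee shop": (51.5145, -0.0930),
    "zzz bookshop": (51.5099, -0.1340),
}

user_lat, user_lon = 51.5150, -0.0925  # wherever the phone last reported

for name, (lat, lon) in outlets.items():
    if haversine_m(user_lat, user_lon, lat, lon) <= FENCE_RADIUS_M:
        print(f"Inside the fence for {name} - push the offer")
```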
I suppose the question there is, why through Facebook? Why not through Latitude, for instance? Well the obvious reason is that most people are already Facebook users. Even if only a small number opt in to geofencing, it would still be a large enough proportion for it to make sense. And then of course, the ‘xxx has just checked into yyy coffee shop and can get a free latte’… would pull in more friends, and therefore trade, inevitably.
People have given their trust to Facebook more than other platforms, already. People are more likely to accept someone knowing where they are if it’s through Facebook than through any other website. Far more people will use Facebook Places than Google Latitude, for instance. I’d suggest that enough people have used FB for a sufficient time that they have built up a sense of ‘trust’ with it – whether misguided or not! I’d even wager that people care less about FB watching them than about any other organisation doing so.
So I’m waiting until FB eventually take the Location Based Services ‘thing’ and do it properly.
I’ve been working with Amazon Web Services for a few months now to get an installation going for a GIS platform for a number of parallel projects. All I can say so far is: it’s simply excellent.
For a long while, there was such a difference between ‘web hosting’ and ‘your own infrastructure’. That is, a small company’s web hosting – maybe with a few apps included, but mostly just a web site – differed massively from a company’s own, large-scale enterprise IT infrastructure. The big point of the cloud is that these two now meet in the middle.
Amazon’s Web Services are a pleasure to use, as long as you’re able to understand the limitations and get your head around the (pretty big) paradigm shift. No longer are you working on something within your own organisation. And that gives you two big changes: you don’t own the infrastructure, but you don’t need to worry about it either. In fact, for most organisations, the big plus is this: you probably have more control over it than if you owned it yourself. Let me say that again – you have more control over it than if you owned it yourself.
What do I mean by this? Well, most organisations have some kind of structure that means that IT changes require (often cumbersome) business processes and gates in place that mean that changes need a large amount of procedure, paperwork, authorisation and (probably) time. When you move to the cloud, you rent the development/testing/prototype/proof of concept platform for as long as you need. So this is the big bonus: you can prove value to the business without needing to go through IT governance to get hold of servers, operating systems, support staff… whatever is apparently required.
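To give a flavour of just how disposable these environments are, here’s a sketch using today’s boto3 library – the AMI id, key pair and instance type are placeholders, and your proof of concept would obviously bake in its own GIS stack:

```python
# Rent a proof-of-concept server for exactly as long as it's needed,
# then throw it away. AMI id, key pair and instance type are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",     # placeholder image with the GIS stack baked in
    InstanceType="t3.medium",
    KeyName="poc-keypair",
    MinCount=1,
    MaxCount=1,
)
server = instances[0]
server.wait_until_running()
server.load()                   # refresh attributes such as the public IP
print(f"PoC server up at {server.public_ip_address}")

# ... run the prototype, demo it to the business ...

server.terminate()              # and the 'infrastructure' simply goes away
```

No procurement cycle, no paperwork for servers nobody will need in a month’s time – which is exactly the point being made above.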
We’ve had the Agile ‘thing’ knocking around for a fair few years now, and (hopefully) we know the pros and cons of this approach: stay close to the business, don’t go mad with documentation, know that change is inevitable, present frequently and agree small steps. Now this is where I think the cloud works really well: it’s the ideal platform for a proving ground. And, as requirements are continuously revealed – particularly non-functional – you can expand the platform to fit accordingly.
This is a bit of high-level praise for the concept of cloud computing, but my experience with the Amazon Cloud has been fruitful so far. I’m going to blog some more as my experience continues, and I’m expecting more business benefits and possibilities to become apparent as time goes on.
For now, the path is inevitable and exciting!
Where GIS and BI (Business Intelligence) Mix
What do I mean by this?
And this is where I think GIS will go in the longer term: it will be key in enabling businesses to tune themselves and to make those decisions: where to sell, where to manage, where to concentrate on particular asset types or conditions.
Basic web maps. That’s what many people need from Geographic Information Systems: no more, no less. They want to see some kind of business information in a spatial context, because they need to get some kind of message on location, as well as perhaps proximity, across.
Nothing wrong with them per se, only they’re perhaps a little… well, old. Very 2002. I used ArcIMS for mapping applications around that time, and it worked a treat – web maps online that weren’t just HTML image-maps were great.
Thing is, times have changed. It’s still possible to have an ArcIMS web map service, but the expectation from the end user is something less GIS, and something more map. Or should I say, something more Web 2.0. The standard has been set, and it’s easy to disappoint, unfortunately.
Yet, it shouldn’t be difficult to produce something that’s a little better these days. Since Google Maps completely changed the face of web mapping, there’s no reason that the newer APIs shouldn’t be used – the Google Maps API itself being a great place to start. OS OpenSpace is another good option, or indeed OpenLayers. Or even choose Flash if you have to.
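Just to show how little effort it now takes – this isn’t the Google Maps API or OpenSpace, but a sketch using the Python folium library, which writes out a Leaflet-based slippy map as a single HTML file; the office locations are invented:

```python
# Build a simple pan-and-zoom web map of an organisation's service
# locations. Names and coordinates below are invented.
import folium

offices = [
    ("Head Office", 51.5074, -0.1278),
    ("Northern Depot", 53.4808, -2.2426),
]

m = folium.Map(location=[52.5, -1.5], zoom_start=6)
for name, lat, lon in offices:
    folium.Marker([lat, lon], popup=name).add_to(m)

m.save("where_we_are.html")  # drop the HTML into the website and you're done
```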
My point being, the role of a map on an organisation’s web site is often to convey information about the location of services. A user is far more likely to get that information using a (newer) intuitive interface, rather than the old-style, clunky, unwieldy interface that used to be the norm. Again, nothing against the journey we took to get here via the ArcIMS HTML Web Map – it served its purpose very well.
But we’re a long way down that road now, and it’s time to update.