Wikipedia vs OpenStreetMap

Hi all,
This is going to be a sorta longish post discussing the pros and cons of two projects, Wikipedia and OpenStreetMap, and also sharing a little of the mapping we did at COEP last week.

To drive the point home, the first item is an infographic I was made aware of last week by a certain Jen.

I would encourage people to open the infographic in its own window/tab by going to the source, so as to see it in all its glory. I do have a few concerns though, as the infographic honestly does not give a good picture of the whole thing :-

a. Wikipedia vs Encyclopaedia Britannica :- Many people have made this out as a David vs Goliath kind of story in which Wikipedia has won. I think that is the wrong conclusion, as everything is contextual and you are basically comparing apples to oranges. Britannica was first and foremost a dead-tree compilation of a body of knowledge, whereas Wikipedia is not. The only example I could find was Rob Matthews’ attempt at printing the whole lot of featured articles, way back in 2009. So in essence the cash flows, business propositions and workings are different. While the online Wikipedia is alive and thriving, it is sad that not a single compilation of printed articles is even being mentioned by anyone, let alone attempted by a business. This gap between having a dead-tree version of a body of knowledge (Wikipedia or some other resource) and the online version is going to have its effects as well. The only other way at the moment is through PediaPress, which again leaves much to be desired. Another effort I know about is this, but that is just for the German Wikipedia. This understanding is just not there with the people who are making those comparisons.

I do think, however, that if Encyclopaedia Britannica had outsourced some of its printing and other related infrastructure to India and/or Sri Lanka, it would have been able to save itself and still serve both the Western markets and the upcoming Asian economies.

b. As a contributor, I find the Wikipedia web-based tools leave much to be desired. This can be read by the core ‘1400’ people who do most of the handiwork of/for Wikipedia. To take a concrete example, one of the things I like/love about Wikipedia is the Infoboxes, and within those I like to see whether the versioning of FOSS software follows whatever the upstream (by upstream I mean the product/project) has going on. If you are into FOSS, you know that most products and projects have at least two or more releases on the table at any time: one is the released product/project, which gets bug fixes and a maintenance cycle; one is a development version which will come in the near future; and perhaps another a little later. See Mozilla’s Firefox as an example of this workflow. At the moment, if you want to mirror in the Infobox what is happening in the project, you have to make another page and link it to the Wikipedia article in question. You can visit Firefox’s page and look at the Infobox on the right, particularly the entries for the Stable and Preview releases. If you click on the +/- for the stable release, for instance, you see this. If you edit the page you get this :-

{{LSR
| article = Firefox
| latest release version = 11.0
| latest release date = {{Start date and age|2012|03|13}}
}}{{Citation|url=|title=Mozilla Firefox Release Notes|date=March 13, 2012}}

→ Back to article "'''[[Firefox]]'''"

''Please do not update this template to the next version until the release notes [] for that version are available.''

{{Template reference list}}

[[Category:Latest stable software release templates|Mozilla Firefox]]

[[ja:Template:Latest stable software release/Mozilla Firefox]]
[[ko:틀:소프트웨어의 최신 버전 (안정판)/모질라 파이어폭스]]
[[pl:Szablon:Ostatnie stabilne wydanie/Mozilla Firefox]]
[[zh:Template:Latest stable software release/Mozilla Firefox]]

Now I’m not gonna run through the code listed, as it should be easy to understand, but this is not an efficient process. It would have been nicer and better if a bot/parser, gifted with a little intelligence and looking for a few keywords, were watching Mozilla Firefox’s release channel and updated the version numbers as soon as they change; that would make things slightly better. I am not talking of anything deep, just basic simple things which should have been fixed a long time back. I have seen similar bots in action but have no idea how much Wikipedia uses and relies on bots for this kind of service.
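To make the idea concrete, here is a minimal sketch of what such a bot’s core step could look like. It assumes Mozilla’s public product-details feed (firefox_versions.json, which carries a LATEST_FIREFOX_VERSION key); the template text below is just the fragment from above, and the actual bot would of course fetch both the feed and the wiki page over the network, which I skip here.

```python
import re

def update_release_template(template_text, versions):
    """Rewrite the 'latest release version' field of a
    'Latest stable software release' template, given a dict shaped
    like Mozilla's firefox_versions.json feed."""
    latest = versions["LATEST_FIREFOX_VERSION"]
    # Keep the field label, swap only the version string after '='.
    return re.sub(
        r"(\|\s*latest release version\s*=\s*)\S+",
        lambda m: m.group(1) + latest,
        template_text,
    )

template = "| article = Firefox\n| latest release version = 11.0\n"
feed = {"LATEST_FIREFOX_VERSION": "12.0"}
# Prints the template with "11.0" replaced by "12.0".
print(update_release_template(template, feed))
```

The real work in such a bot is not this substitution but the care around it: respecting the “do not update until the release notes are available” note, and editing via the API with proper rate limits.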

c. There are more worrying trends that I see there. For instance, the bit about libraries being used less; that is bad, really bad. I am somewhat of a history buff, which I realized later in life when I was able to read some interesting books about places and people, rather than the boring history books we had in our schools, where mugging up was/is the answer to all the questions and one is not supposed to either interact or deviate, as history as a subject is/was sacrosanct; add to that the use of selective history as a tool to subvert the masses. A case in point: around 2-3 years ago I read an account of the Stasi in Henry Porter’s Brandenburg. The article on Wikipedia de-humanizes what East Germans must have gone through. First Hitler and then the Stasi; I feel lucky that I was not born into either of those periods in East Germany.

d. The other worrying trend is that if information is not on Wikipedia, then people stop searching. This again, I feel, is not good, because Wikipedia is not and does not want to be everything for everybody. Having diverse and different data-sets is good for the enrichment of the Internet, for competition to Wikipedia, and for the general welfare of all. If people are using, and going to use, Wikipedia as the be-all and end-all for research, that is really bad. I think many people do not really check what class an article is in, nor do they look at who edited a particular page to spot people who have an interest or bias in maintaining it. Other issues/problems, such as sock-puppet accounts and other malaises, are also part of Wikipedia’s growth. Also, information based on third-party sources is tricky and at times may not be correct/factual. Add to that the whole Inclusionism vs Deletionism debate. Simply put, there is much more going on behind the scenes than meets the eye.
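Checking who edits a page is actually easy to do programmatically: the standard MediaWiki API exposes a page’s revision history via action=query with prop=revisions. A minimal sketch follows; the query URL uses the real public API parameters, but the sample payload and the username in it are made up for illustration (a real run would fetch the URL instead).

```python
# Sketch: list recent editors of a Wikipedia article via the MediaWiki API.
API_URL = ("https://en.wikipedia.org/w/api.php"
           "?action=query&prop=revisions&titles={title}"
           "&rvprop=user|timestamp&rvlimit=50&format=json")

def recent_editors(api_response):
    """Extract usernames from a prop=revisions query response."""
    editors = []
    for page in api_response["query"]["pages"].values():
        for rev in page.get("revisions", []):
            editors.append(rev["user"])
    return editors

# Illustrative sample of the response shape, not real data.
sample = {"query": {"pages": {"42": {
    "title": "Firefox",
    "revisions": [{"user": "ExampleUser",
                   "timestamp": "2012-03-13T00:00:00Z"}]}}}}
print(recent_editors(sample))  # → ['ExampleUser']
```

Counting how often the same handful of names recur in that list is a quick, rough way to spot who is maintaining (or guarding) a page.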

I also have a small criticism of opensite as well. It would have been nicer to have both a machine-readable and a human-readable image; a .JPG is a dumb image, and it would have been far nicer to have an image that rendered the underlying data in a way that could be picked up/hotlinked to in some meaningful fashion.

Now let’s focus on another project called OpenStreetMap. There are a few similarities between the two: both use crowdsourcing/user-generated content; both have been publicly available for similar timelines (Wikipedia since 2001, while OSM was launched in July 2004); both of them use databases quite a bit; and an interesting direction for both is the use of semantics in their respective projects.

Now OSM (OpenStreetMap) is a slightly different beast. Not many people know that Google Maps is not FOSS, meaning I cannot sell or reuse the data as I see fit; Google gives me very limited rights over what I can do with the data (as do other commercial organizations). An interesting story along the same lines came to light some time back. OSM is subverting this by giving/sharing its data, right now under CC BY-SA 2.0 (Creative Commons Attribution-ShareAlike) and eventually moving to the ODbL licence. I’m not going to go into the merits and de-merits of either licence, or how they affect or will affect OSM, but simply state that they are trying to get more contributions with the move. You can find some of the reasons listed on the wiki.

Now OSM as a software project is in similar shape to many other free software projects. Documentation is still an issue, both on the site and on the roadmap. There are quite a few tools that OSM uses, and it’s a constant game of catch-up between OSM and the various tools used in it. For instance, one of the well-known editors used with OSM is JOSM; another well-known one is Potlatch 2. You could say that OSM is JOSM/Potlatch 2 + Mapnik/CycleMap/Osmarender + Osmosis/osm2pgsql + user contributions, and you wouldn’t be far off. Now the differences between the two editors are as night and day. JOSM, for reasons unknown to me, still survives on Subversion and is not that great on documentation, whereas Potlatch 2 has moved to git and is better off in the process. There are people who use Potlatch 2 most of the time, while some swear by JOSM. The benefit of JOSM is that you can map without Net access, whereas for Potlatch 2 you need Internet access (as you do live editing).
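What all these tools pass between them is the same simple data model: nodes (points with coordinates), ways (ordered lists of nodes), and free-form key/value tags. A small sketch of reading that model with nothing but the standard library; the OSM XML layout here is the real format, but the ids, coordinates and the street name are made up for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written OSM XML fragment: two nodes joined into one way.
OSM_XML = """<osm version="0.6">
  <node id="1" lat="18.5293" lon="73.8568"/>
  <node id="2" lat="18.5300" lon="73.8570"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="residential"/>
    <tag k="name" v="Boat Club Road"/>
  </way>
</osm>"""

root = ET.fromstring(OSM_XML)

# Index node coordinates by id, then resolve each way's node references.
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in root.findall("node")}
for way in root.findall("way"):
    tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
    coords = [nodes[nd.get("ref")] for nd in way.findall("nd")]
    print(tags.get("name"), tags.get("highway"), coords)
```

Editors like JOSM produce this XML, Osmosis and osm2pgsql shuffle it into databases, and renderers like Mapnik draw from those; the tags are what make a pair of nodes a “residential highway” rather than just a line.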

Let’s move on to last week’s mapping exercise that some of us did at COEP. A bit of background info is in order. Praveen had put out a call for mapping bus routes using OSM a couple of weeks back. After a bit of back and forth it was decided to meet at COEP on Sunday and do a bit of mapping in the evening. So, with a grand total of 5 people, we did some mapping with just one device between us. Gautam opened an account and uploaded an edit to OSM. The big issue for us was getting good connectivity. It was cool for me, as I hadn’t seen the COEP boat club side wing in as much detail as was possible that day while we were mapping it. I also took the opportunity to see the renovated Main Building (which is a World Heritage Grade 1 structure, or so I was told). Apart from the connectivity issue, the other problem we had was the small number of presets. If one goes on the site and looks, you would see a large number of plugins, presets and styles which would have made the job complete, but to use them you need to put some time in. I also have no idea how compatible the presets are with the current rendering engine. Either way, we did have fun and learned a bit about OSM. As with anything else, there is lots to develop and hack on, either in OSM or in any of the tools. I would end the post by sharing a post I read a few days back, which also serves as a nice backgrounder on the subject. I do have to mention, though, that there are quite a few cool things you could do if you are interested in mapping at all; geocoding and geomapping/geotagging are two things people can get easily addicted to once they know how. Did I mention that doing any mapping using the phone is also battery-intensive? Sorry, but that’s it for now, as I have exceeded my own set limits 😛
