GNUnify Day 2

This post attempts to recount the events of Day 2 of GNUnify 2011.

First up, as with yesterday, the tracks or sessions I would have attended had the Wikipedia track not been there.

Sessions not seen but which I was curious about :-

a. Dum ka biryani (makefiles) – Shakti Kannan :- Now I don’t really know much about makefiles, but I have done enough of ./configure, make and make install to have some rough (perhaps incorrect) idea of what happens when you do that. I usually prefer/use GNU Autotools and rarely CMake or some of the other new-fangled build systems. I like the GNU way because it’s verbose in its error output and that’s what I started with.

1. configure :- It basically assesses your system to find out whether everything needed to build and run the program you want installed is present. It will show errors if something is missing (one of the dependencies) until you correct it. At the very end it generates the Makefile, signalling that everything is in place for the build.

2. make :- Do a make if it’s a fresh build, or do a make clean beforehand if you have built the program there before.

What ‘make clean’ would do is clean the build environment/area of any left-over build files from the previous make instance.

The make command itself will do all the compiling and linking required for the program, produce the executable and leave it in the build directory where the local user can run it.

3. make install :- Does the system-wide installation. Generally preceded with ‘sudo’, as only the administrator (or people with an administrator role) has the rights to do a make install.

While I have obviously simplified the whole process (a lot), it would have been interesting to understand things a bit more. While I will perhaps never be a developer, it could have been useful for troubleshooting when running a troublesome makefile.
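
For what it’s worth, a minimal sketch of the usual sequence (the tarball name and install prefix here are just placeholders):

$ tar -xzf foo-1.0.tar.gz            # unpack the source (placeholder name)
$ cd foo-1.0
$ ./configure --prefix=/usr/local    # check dependencies and generate the Makefile
$ make                               # compile and link inside the build tree
$ sudo make install                  # copy the results system-wide (needs admin rights)
$ make clean                         # optional: clear out left-over build files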

This is, of course, not for newbies, as most programs are available, or should be available, as either a .deb or an .rpm package.

I only use it if there are packages which my distribution does not yet have (with an RFP – Request for Package – bug posted), helping the developer along so people can use the package even if it’s not in the distribution.
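
For anybody wondering how such an RFP bug gets filed, a rough sketch using Debian’s reportbug tool (the interactive prompts are paraphrased from memory, so treat this as an outline rather than gospel):

$ reportbug wnpp
# reportbug asks what kind of wnpp report to file; pick
# "RFP: request for package" and supply the upstream name,
# homepage URL, licence and a short description of the software.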

I don’t know much about RPM, but for things to get into Debian a package has to abide by Debian Policy, be lintian-clean and satisfy several other requirements (all obviously good and needed, but it also makes it slightly harder for the developer/maintainer to get a package into the archive).

b. TUX under the hood – Suchakrapani Sharma :- I don’t really know what the talk would have been about: either the Linux kernel alone, or the whole GNU/Linux stack (the GNU tools, the kernel, window manager(s), desktop utilities, the whole shebang). I didn’t go, hence I don’t really know.

c. Going native with Android – Vivek Khurana :- It would probably have been VK showing off Android and the fun things one could do/play with on it.

I have not had the opportunity to play with it, but I do know that it has a lot of tools taken from the GNU/Linux stable and a virtual machine called Dalvik which runs Java code compiled into .dex files as executables. How good, secure or stable the OS is, I have no idea.

It has ‘Apache 2.0’ as its license, which basically makes the software sit at the crossroads of FOSS and proprietary licensing. Whether that is good or bad depends on which prism you look through. Simply from the company’s perspective, they get the flexibility of doing things either way.

d. Open LSM by Arun Bagul :- I had heard of this project; the idea is basically to have a nice web control panel, probably something that replaces cPanel (a proprietary solution), making things easier to use and issues easier to troubleshoot in the long term.

The trick/problem most probably would be convincing both site owners and hosting providers, which are large communities, to try an alternative. While site owners may jump ship if given a nice UI and better controls, the hosting provider community is usually a conservative one (it’s the nature of the job, hosting stuff 24*7, 365 days a year).

I would have liked to see the control panel running on some site, to see how the UI looks and perhaps how telnet and other things work through it. I don’t know if the project is still in its infancy or whether it has matured/stabilised enough to be used in the wild. It would be interesting to hear, from those who attended or from the speaker himself, where he sees it going.

e. RegEx – Gaurav Pant :- I know a bit of grep and a bit of ‘.*’ wildcard stuff, but that’s it. RegEx, or ‘regular expressions’, are useful in shell scripts as well as in programs for doing all sorts of text work. I do know that if you are a good RegEx user you could (creatively) write programs in half the number of lines compared to not using RegEx. They are most useful for programmers, but could also benefit people doing something or the other on the terminal.
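
A small taste of what I mean, just as a sketch (the file name is a placeholder): one grep with an extended regular expression doing what would otherwise need a loop or several commands :-

$ grep -E -o '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' contacts.txt
# -E switches on extended regular expressions and -o prints only the
# matching part of each line, so this pulls out just the things that
# look like email addresses from the file.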

The workshops would have been a similar story, but alas, that was not to be. Wish I could have split myself into two, bah 😛

Sessions I attended :- This was the fun stuff I saw :-

a. Introduction to Wikipedia by Anirudh Bhati :- Anirudh started with some brief history and then quickly came to the edit link (the top-most one) and showed how an article can be edited. Something like this. The article I have linked may be controversial, so please be careful.

Anyway, Anirudh then proceeded to show off the New Article Wizard page. I don’t remember whether he showed the wp:create page, but I do remember him not showing the Requested Articles page, which new users can dip their feet into. The downer of that page, of course, is that it’s so large it can easily intimidate a new user. I also do not remember whether Anirudh showed the WikiProject India page.

The problem (for me) is always which pages to show and which not, especially with new Wikipedians, as there is a lot of content in various stages of creation and maintenance and it’s not easy for newbies to pick a thread to follow.

A couple of interesting conversations happened during the presentation.

1. Somebody from the audience, a woman (I’m bad with names, sorry ma’am), wanted to make a ‘paper honeycomb’ article. Fortunately or unfortunately the article is already there, so she wanted to add some content to it. Anirudh showed her the registration page, how the credentials show up, and how otherwise the dynamic IP shows up in the history of the page.

Then, I don’t know how, the conversation veered towards the creation of her own page. I don’t remember whether Anirudh signed in or not, but he did make a stub page for her stating that she is an IT professional working from Pune, India. Within minutes the page was scrapped (by a bot, of course) as it didn’t meet the notability rules and guidelines.

He explained a bit more and then quickly ended his presentation.

b. Wikipedia in Schools project by Nikhil Sheth :- Nikhil literally jumped onto the stage and was very enthusiastic about the project (almost like a child). I have to admit I have a slight cynicism about the project, as it is pretty ambitious and has many challenges to face, right from getting content in to making it appealing to as many kids as possible.

Nikhil showed off this page. He showed the previous attempts and how he liked the Kiwix reader and the openZIM format.

We discussed various issues, right from having content from the Simple English Wikipedia, to having more content (the Hindi Wikipedia is only half-filled, at 2 GB), to having content in concurrence with local cultural norms and ideas. Ashwin added the ‘Wikipedia for Schools, Indian version’ idea to the list of things to do.

There were various proposals, like adding FOSS tools and having a compilation that adds value to the whole.

While discussing the above, I was unconsciously reminiscing about the NGO-in-a-box solution I knew of.

Now the problem with both this proposal and NGO-in-a-box is that they need regular maintenance and upkeep. Historically, FOSS tools have had a tremendous rate of change, with, say, a big release every year in which either the UI (user interface) or some core functionality changes, so one needs to constantly reinvest some energy in adapting to the new functionality and/or UI. This is again both good and bad, depending on how you view it (anyone still using IE6?).

Just take Firefox or Google Chrome as simplified examples and look at the changes and release history of both browsers.

The solution to this lies in Kiwix (the reader), the openZIM format and the collection of tools around them. Of course, as of today all of the ZIM tools are in their infancy. I haven’t played with any of them in any depth, but just seeing the history of the tools tells me they are only at the beginning of their lives.

One of the ideas I had was to add some kind of svn/bzr/git-style syncing to the collection of ZIM tools, something similar to apt-offline. While apt-offline works for a single machine, the same concept for a wider network would be something similar to apt-cacher. What would be needed is that it syncs beautifully and has some good change notes when things get updated. Something like this, for instance :-


$ svn update
U src/CZone.cpp
U src/CNPC.h
U src/CEditor.h
U src/CZone.h
U src/CNPC.cpp
U src/CEditor.cpp
U src/interactionpoint.h
U src/CLuaInterface.h
U src/interactionpoint.cpp
D data/arinoxDungeonLevel1.spawnpoints
D data/zone1.spawnpoints
D data/zone2.spawnpoints
A data/zone2.init.lua
A data/arinoxDungeonLevel1.spawnpoints.lua
U data/quests_hexmaster.lua
A data/arinoxDungeonLevel1.init.lua
A data/zone1.spawnpoints.lua
A data/zone2.spawnpoints.lua
A data/zone1.init.lua
Updated to revision 765.

and then using the log to see what changed :-


/dawn-rpg$ svn log |more
------------------------------------------------------------------------
r765 | arnestig | 2011-02-17 00:24:22 +0530 (Thu, 17 Feb 2011) | 2 lines

* Adding world position (x + y) in the top left corner when in editor mode. (DAWN-110)

------------------------------------------------------------------------
r764 | arnestig | 2011-02-17 00:12:58 +0530 (Thu, 17 Feb 2011) | 9 lines

* A new file is added for each zone called .init.lua. This is where the zone is now initiated.
* the init-file calls the new spawnpoint file called .spawnpoints.lua. Here is where all the generic NPCS (by generic I DONT MEAN quest mobs, vendors,questgivers - but normal mobs that doesn't interact in any way. just being there).
* NPCs can be added in the editor now. Press F1 past the collisionboxes and you will be able to place NPCs in the game, using the same technique as before (ENTER to place).
* Deleting NPCs work the same way. Point the mouse over the NPC and press [DELETE] and it will be erased from the zone.
* Moving NPCs work the same way as moving objects. Click on the NPC and move with the arrow-keys on the keyboard.
* NPCs will by default have 180 seconds respawn, have respawn set as default and be HOSTILE (!). This is because I think that normally adding NPCs we want them to be hostile towards us. You can simply change this later in the spawnpoint-file to whatever attitude you want them to have. You can also name the NPCs in that file using curNPC:setName(".....").
* Saving the zone will also save the NPCs that are added to the zone. NOTICE! It will not write the NPCs that are quest givers, shopkeepers, quest monsters, etc .. Just the normal generic NPCs. So your questmobs or vendors etc, will still have to go into the quest/init-lua file.

------------------------------------------------------------------------
r763 | arnestig | 2011-02-14 22:47:03 +0530 (Mon, 14 Feb 2011) | 1 line

* We now check for existence of the savegame.lua file before trying to load. Also indicating if there is a game to load in the menu by drawing the "Load game" menu item red / grey when no game is available for loading. (DAWN-107)
------------------------------------------------------------------------
r762 | arnestig | 2011-02-14 22:27:11 +0530 (Mon, 14 Feb 2011) | 1 line

Both the Subversion snapshot and the log are from a GPL RPG game called dawn-rpg, which I sometimes help out with documentation or bug posting/triaging for in my free time.

The idea is basically that even if only one machine is capable of connecting to the network, it can update the content and then push it out to the rest of the machines. Now this again depends upon whether it’s a single-machine setup (single server, multiple clients) or multiple hosts running independently.

Now Kiwix could include that and have people input specific URLs to sync and read from (something like one can do with .deb repositories and /etc/apt/sources.list), and you would have a solution with multiple gardens: not just for Wikipedia but for any MediaWiki installation which records content in the openZIM format. That would give a much wider reach for all sorts of content and make Kiwix and openZIM ripe for becoming part of the content-creation life-cycle. I’m not a developer, hence I cannot add value, but it’s just a thought 😛
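
Purely as an illustration of what I mean (this is a made-up file and format, not anything Kiwix or openZIM actually supports today), such a ‘sources list’ for ZIM content could look something like this :-

# hypothetical /etc/kiwix/zim.sources -- invented for illustration only
# <mirror or wiki URL>                <collection>          <language>
http://download.kiwix.org/zim/        wikipedia-schools     en
http://download.kiwix.org/zim/        wiktionary            hi
# a MediaWiki installation publishing its own ZIM snapshots (made-up URL)
http://wiki.example.org/zim/          local-notes           en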

The overall idea is to have all such tools in a customised Debian/Fedora distribution, which would make some noise. It would make things so much easier.

There were also discussions as to whether people should or should not charge money for doing such work, as kids/students may break things while fooling around with the system(s) and may need guidance.

While I personally would encourage anybody to fool around with the system (and to learn to live with the repercussions of the same), the same cannot be said for doing the drudge work of installing stuff again and again. If you encourage people to tweak and play, there are equally good chances they may discover or add something of value to the OS/software as well.

I guess it would all depend on the person who is doing the work, his/her relationship with the school(s) and how they work it out.

There were probably a couple more points, but that’s all I remember from that session.

c. Wikipedia mobile: past, present & future by Tomasz Finc :- One area where Wikipedia can really kick up a storm is mobiles. Tomasz (the z is silent, I *think*) was looking at smartphones as the present and the potential future.

One of the things where smartphones can be useful, from Tomasz’s point of view (and I agree), would be content creation in the form of photos: taking multiple photographs of some place (historical, touristy, whatever), which prompts editors to make an article stub, thereby starting an article creation and maintenance cycle.

One of the queries I had was about having multiple photos of the same place, and apparently Wikimedia does not have a problem with that.

The interesting point is that some of these smartphones also have GPS built in (with more such chips coming in future models), and coupled with latitude, longitude and time of day, lots of interesting metadata is generated.
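
Just to show the kind of metadata I mean, a small sketch (it assumes the exiftool utility is installed, and the file name is a placeholder) :-

$ exiftool -GPSLatitude -GPSLongitude -DateTimeOriginal IMG_1234.jpg
# prints the latitude, longitude and original timestamp stored in the
# photo's EXIF data, provided the phone's camera recorded them.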

When the semantic Wikipedia comes, the whole collage would be very interesting to see. Think of being able to see, say, the ‘Taj Mahal’ or ‘Gateway of India’ or ‘Bibi Ka Makbara’, or literally thousands of such historical monuments and places of tourist interest, from multiple angles. Huge, huge upside; the only downside for Wikipedia is needing huge server farms for storage and duplication.

What I disagree with, though, is Tomasz’s idea of people using the smartphone for doing wiki editing on the go. For me, three or four things stand in the way of this working out well :-

1. Phone affordability
2. Screen size
3. Playing on small-sized QWERTY keypads
4. Internet connection/connectivity

As of now smartphones are beyond the reach of the common man; perhaps in another 5 years or so. Also, given that Samsung has somewhat monopolistic control and pricing power (60%+ market share as well as production volumes) over LED displays, which *are* the only real contenders in displays, it is going to be a very slow uptake.

Note :- Historically, monopolies, with the pricing discrimination they enjoy and the power they use and abuse, haven’t been good for the consumer. Take the case of the European Commission, which slapped penalties on DRAM chip makers in the past, only for them to be overturned later by the WTO. In essence, the big boys have nothing and nobody to fear.

Something for those who might be interested in seeing things in this light.

Personally, I would not really engage with a small mobile device for editing an article; the talk page, being informal, could perhaps be considered/used. I don’t see it being used for extended periods of time, though.

Although I do hope I turn out to be incorrect, as it would make me happy if more people join the bandwagon; it doesn’t really matter what device they use or how they do it. 😛

Anyway, that was his big thing and he was very excited about it. He did point out that while the iPhone has an app, the Android version isn’t there at all. On that somewhat pensive yet positive note he ended the presentation.

d. Wikipedia in Indic languages by Hari Prasad Nadig :- It was supposed to be Ashwin/Mandar, but for some reason Hari decided to do this one. I have no background info on this and it really doesn’t matter.

Hari made a sort of ensemble presentation, where he didn’t just talk about Wikipedia but also about the other sister projects such as Wikiquotes, Wikisource and Wiktionary, all of which have a very bright future. He also mentioned Wikiversity, but was too brief about all these projects.

In hindsight, we should have had at least 2 hours for each of these projects, because all of them are unique but also overlap with some other community projects, and it’s not easy to really understand the differentiation and the motivations behind them. We did discuss some things in each of them.

a. Wikiquotes :- It is a beautiful thought, the idea of having quotes on any topic handy for stating one’s point of view. In GNU/Linux distributions there is a package called fortunes which does something similar.


$ aptitude show fortunes
Package: fortunes
New: yes
State: installed
Automatically installed: no
Version: 1:1.99.1-4
Priority: optional
Section: games
Maintainer: Joshua Kwan
Uncompressed Size: 2,851 k
Depends: fortunes-min
Recommends: fortune-mod (>= 9708-12)
Provides: fortune-cookie-db
Description: Data files containing fortune cookies
There are far over 15000 different 'fortune cookies' in this package. You'll
need the fortune-mod package to display the cookies.

The problem with that package is that it’s not updated (enough); the last changes to the database and to the package itself were in 2004 and 2009 respectively.


/usr/share/doc/fortunes$ zcat changelog.gz |more
(Note: this file has been re-arranged to be in reverse chronological
order, which is The Right Thing for ChangeLogs - DLC)

March 05, 2004 (fortune-mod-1.99.1)

Most of the changes have occurred at some point in time in the last 5 years.

A high number of spelling, punctuation, formatting and grammar fixes.

internationalization support.

New -c option to see which file a fortune came from.

and the Debian Changelog


$ zcat changelog.Debian.gz |more
fortune-mod (1:1.99.1-4) unstable; urgency=low

* Take bulk attribution/typo corrections from Simon Danner's git. Thanks so
much! Saves me some work for sure.
closes: #411907, #400232, #502483, #497057, #497060, #385408
closes: #527198, #445470, #476770, #500282, #369662, #363095, #386503
closes: #501748, #498932, #491815, #485388, #361896, #514243, #476772
* Soften dependency chain between fortune-mod and fortunes-min. In this case,
fortunes-min now recommends fortune-mod, while fortune-mod recommends
fortunes-min | fortune-cookie-db. closes: #529065, #542935
* Add some new fortunes. closes: #432849, #416290, #411079, #390434, #373817,
closes: #359311, #350838, #347945, #546644, #521421, #499068, #376182
closes: #410254
* Add the 'tao' fortunefile, closes: #228930.
* Fix Mark Twain attribution, closes: #514144
* Correct some chess moves, closes: #388167
* Change dependency on xcontrib to x11-utils. closes: #462625
* Move fortune menu category to 'Applications'.
* Delete strange postinst that created /usr/local/share/games/fortunes/off.
If it fixed a real problem, I would like to rediscover it to figure out why THAT was the solution so many years ago.

-- Joshua Kwan Tue, 29 Sep 2009 15:38:11 -0700

Also it doesn’t have a good interface, only a CLI front-end, something like this

For instance :-


$ fortune -m humor |more

%
In America today ... we have Woody Allen, whose humor has become so sophisticated that nobody gets it any more except Mia Farrow. All those who think Mia Farrow should go back to making movies where the devil gets her pregnant and Woody Allen should go back to dressing up as a human sperm, please raise your hands. Thank you.
-- Dave Barry, "Why Humor is Funny"

I actually deleted part of the output and some of the more (subjectively) offensive quotes, but it gives an idea. It is a hit-and-miss affair unless you know your regexps well and can frame your queries.

What could be done with Wikiquotes is to package it, get a nice UI to view the quotes offline, and perhaps also have some nice graphics/drawings along with them. It would be quite a bit of work integrating all that, but it would let people who are not connected to the net really enjoy Wikiquotes as well. This could be done for both Windows and GNU/Linux, which would mean many more people using it and consequently contributing back to the source.

b. Wikisource :- This one somehow fails to differentiate itself from other projects I know of: Project Gutenberg, Archive.org’s Open Library and Google Books, to name a few.

They need to really differentiate themselves, or explain what they do better or how they add value over whatever exists at the moment. There is, and perhaps always will be, a lot of overlap with existing projects. It adds value only if there is *new*, unpublished content which gets into the public domain, something like what the Wikipedia GLAM project (a part of which is being championed by Bhishaka and Ashwin in the Indian scenario) is about.

c. Wiktionary :- This would be perhaps the closest to my heart after Wikipedia. While I haven’t done any edits on that one (you need to know about the etymology of words, where they came from and stuff like that), I do tend to use it quite a lot. Just like Wikiquotes, it would be great if we could have a dictionary/thesaurus that was free for offline use. To keep memory and disk space down, we could have collections for specific domains, say ‘Engineering’ or ‘Computer Science’ and things like that. It would really add value there.
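
As a rough parallel on the offline-dictionary side, a sketch (assuming a Debian-like system; the word looked up is arbitrary) :-

$ sudo aptitude install dictd dict dict-gcide   # local dictionary server, client and the GNU Collaborative Dictionary
$ dict freedom | more                           # look a word up against the local server, no network needed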

It may also have the side benefit of helping to improve OCR, as IIRC OCR tools also need a bank of words or something similar to do recognition. So *it* may help.

There was no dearth of ideas; the only problem, as far as I can see, is how many can be done and how they can be achieved. While some can be volunteer efforts, some would need funding and dedicated time to become a reality.

Overall, I enjoyed myself thoroughly that day.

Note :- In the past, Symbiosis had said they would put up the videos; to date it hasn’t happened. This year too they have promised, and at least on the encoding side things have been easier, although uploading the stuff (namely the bandwidth) is still a challenge. It would be cool if they are able to solve it.

2 thoughts on “GNUnify Day 2”

  1. Dear Kartik,
    Indeed I was able to meet him, but only at the fag end of the second day. We went out for a couple of beers, where he did express an interest in doing a key-signing party.

    I do not have a laptop, and in fact I need to discard my old key and build a new one as I have a new desktop, but I have just been lazy. Also, unless you are a developer, into privacy/security/hacking, or simply paranoid and encrypt all your messages with your private key and somebody else’s public key, it has limited to no value.

    As far as Haskell is concerned, I just have no idea (being a non-developer), although I do check out his blog posts as part of p.d.o.
