Experiences in the community


MiniDebconf 2011 Day 2

This longish post attempts to chronicle some of the experiences on day 2 of #minidebconfpune (the final day of MiniDebconf 2011).

Hi all,
You can find the previous days' posts here and here.

There are a couple of things which I left out or papered over in the Day 1 chronicle.

a. Distributed dhcpd :- While I gave the impression of a big monolithic DHCP service run by an ISP, the reality is (and will remain) a bit more complicated. Without going into much detail, ISPs follow a sort of distributed DHCP service which splits the address ranges internally, so one of several servers hands each incoming connection a dynamic IP that stays unique for some lease period. This distribution of IP address ranges within the domain (apart from reverse-path lookups) is what helps ISPs, cyber-crime and anti-terror folks figure out stuff, at the very least for petty crime. We need a stronger and fairer Information Technology Act, but that again is a story for another day.
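
To make the splitting concrete, here is a minimal sketch of how two ISC dhcpd servers might divide one pool between them; the subnet, ranges and lease time are all made-up illustrative values, not anything from the event.

# server A's /etc/dhcp/dhcpd.conf (illustrative values only)
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.10 10.0.0.127;   # server A hands out the lower half of the pool
  default-lease-time 86400;     # leases expire, so an IP is "yours" only for a period
}
# server B's dhcpd.conf would carry the other half:
#   range 10.0.0.128 10.0.0.250;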

The other thing I left out was that the last half hour to an hour was spent hearing what the newbies wanted to learn/know about on the next day (day 2). We put up a number of topics, from shell commands to C programming, around 5-6 topics in all (QT studio among them) for the last day. Quite a few students/people wanted us to share some of the shell commands, with some showing interest in C programming (gcc, gdb) and some in GUI programming (QT designer/QT studio).

The last day turned out to be a buzz of activity. Behind the scenes we had been discussing when to have a GNOME 3 release party and/or discuss GNOME 3's lateness in Debian sid/unstable. As GNOME is the default desktop on Debian, and GNOME 3 being the shiny new thing that was taking so much time, I wanted to share something about the process. I also had some schwag from the GNOME Foundation and had been wanting to use the opportunity to distribute the balloons, stickers and buttons I had received, as a 'GNOME 3 release party'.

There were multiple ways I could have done it, but sadly I was in a bind.

While GNOME 3 was officially released on April 6, 2011, the release was horribly broken (I will expand on this a bit later). The GNOME 3 release in Debian is still ways off and broken at the moment, with some sort of QA happening as items reach sid/unstable and testing.

Another way I could have played it was to use a pre-release of Fedora 16 with crossed fingers, but that probably would not have been right in many ways. The GNOME 3 release in Fedora 15 had left quite a few people in pain/turned off, and the blogosphere was littered with tales of non-functional desktops and things I didn't want to get into.

One of the other options I had was to just scrounge a few of the screenshots and videos from either the GNOME project and/or various interesting people who had found or were using some of the cool stuff.

This I also dumped, as it would have felt quite unnatural; it's much nicer and truer if you can give a demo of the new release.

After much internal deliberation and some discussion with Praveen, it was decided that I would talk about transitions in general and give some idea of the big GNOME 3 transition happening (stalling, at the moment) in Debian sid/testing, which would finally go into Debian stable.

One of the first big mistakes I made on the third day was not using the projector. For quite a long time I have had an aversion to laptops (especially to touchpads and small pesky keyboards), and it takes me time to punch out stuff on a lappy which I could do in, say, a quarter of the time on a normal-sized keyboard.

Hence I used the whiteboard in the class-room. One of the big predicaments in front of me: when I asked people whether they had done an Ubuntu release upgrade (meaning going from 10.10 to 11.04), apart from the volunteers and speakers (who for obvious reasons are not to be counted), only a couple of hands went up. So it was apparent to me/us that it would take some time for them to get how a GNU/Linux distribution's way of jumping releases differs from, and in many ways improves on, say, Windows. While everybody knew the Windows install and the different product releases, they were unaware of the GNU/Linux release process. I quickly went over a few points and slowly introduced the Debian release process.

Over the last two days we had explained numerous times the need for and the advantage of having a shared library. Now was the time to show one of the dark sides of a shared library, which is when a transition happens: a new upstream release of a package also needs to integrate changes from a new version of some $shared library. So we tried explaining a few times how a typical transition works (how new upstream releases of packages, along with changes from shared libraries, are part of a transition). I also gave them the link to release.debian.org/transitions so they could go, figure out and see some of the transitions happening and how many packages remain for a transition to complete.
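
A quick way to make that shared-library entanglement visible is to look at what a binary actually links against; a small sketch (the exact library list and addresses will differ from machine to machine):

$ ldd /usr/bin/gnome-terminal | grep -i gtk   # which GTK the binary is tied to
        libgtk-x11-2.0.so.0 => /usr/lib/libgtk-x11-2.0.so.0 (0x...)

Every package whose binaries link against a given library has to move together when that library changes, which is exactly why a transition drags so many packages along.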

In retrospect I should have taken the help of a volunteer to surf and show the site on the big projector, which would have helped visualize the changes which happen.

On a side note, the people who have made the web front-end easier for folks to figure out stuff deserve real credit (it could still do with a little more polishing, but it's better than nothing).

Anyways, Praveen made the strong point of how the different suites react and come into play. While all the work in experimental and unstable is done by hand (uploads by maintainers), whatever lands in unstable migrates into testing based on priority/urgency, and sometimes a package's migration may be deferred. Over the last two days we had also explained a number of times what the different suites did and how they functioned.
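
For the curious, the grep-excuses tool from the devscripts package is one way to watch that unstable-to-testing migration for a given package; a sketch (the package name here is just an example):

# apt-get install devscripts    # as root; devscripts ships grep-excuses
$ grep-excuses gnome-terminal   # shows the 'excuses' holding a package out of (or letting it into) testing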

While people had some idea of the above, I shared the 'GNOME 3 status in Debian' page, which comes up as the first or second link if you google that phrase: http://www.0d.be/debian/debian-gnome-3.0-status.html.

Again, this beautiful service is maintained by fpeters. The only hitch with the service is that sometimes the packager may make two versions of a package (2.30-2.32 and 3.0) which are parallelly installable in a suite. Sometimes the 3.0 release has no history, or needs to be installed as a new package, in which case fpeters has to manually add it to the table, which does lead to quite a bit of latency; apart from that it's good.

Anyways, then I shared that most of the heavy lifting of taking stuff from experimental and putting it in unstable is coordinated via a mix of IRC (#debian-gnome on IRC.oftc.net) and the mailing list pkg-gnome-maintainers@lists.alioth.debian.org.

A lot of the heavy lifting (metaphorically, both times) is done by Michael Biebl (he really does try to do a lot), with some efforts by Laurent Bigonville and Josselin Mouette. At the end the only thing we stressed was that we need more active hands helping with GNOME packaging and transitioning in Debian.

Note :- If one looks upstream, GNOME 3.2 was released about 4-5 days back, and the upstream GNOME mailing lists number close to 30-40, so more hands which can do the packaging and, more importantly, the maintaining work are more than welcome. I am sure the pkg-gnome team would also welcome people who keep an eye on those 30-40 mailing lists and find/discover any interesting patches that may be lying in the upstream project's Bugzilla.

I also shared the various sub-transitions that must happen for the big GNOME transition to be completed: from the libnotify (notification-daemon) transition currently going on, to libevolution (Evolution), libpanel (GNOME Panel), libnautilus (Nautilus 3), libnm (Network-Manager) and various bits and pieces that need to be covered for the GNOME 3 transition to be complete.

One of the interesting questions Praveen asked was to share about the GNOME Foundation and maybe give some sort of overview vis-a-vis the Mozilla Foundation. Here again I papered over the answer; otherwise it would have taken quite a bit of time and would have gone (a bouncer) over lots of people's heads.

The real answer: I am an outsider to both organizations and do not really feel the need to engage (more) with either of them apart from schwag and doing some events off and on. The roles and responsibilities of the two foundations are similar (note :- there is a strong possibility that Mozilla looked at a lot of the GNOME Foundation's processes while setting up the Mozilla Foundation), but as far as engagement with the general populace or even with the FOSS community is concerned, Mozilla has had more successes, while the GNOME Foundation still needs to be a bit more open and should also be seen to act fairly. It is a process issue.

A part of the puzzle might also be that Mozilla is more in the user's eye (bringing the new flashy bling thing most of the time), so it certainly has more mind share than GNOME, which has a sort of staid old-grandmother feeling to it. GNOME is also more corporate-oriented in many a way. Lastly, apart from the small group of people who are passionate about gtk3, QT and such toolkits, most people (general users) just want to do stuff and have the rest of the UI get out of the way.

Anyways, after the GNOME talk, while the packaging groups went to their own rooms, we asked the newbie crowd to spend a little more time with us in the hall itself. We asked them a few questions from the previous day and tried to figure out whether any of the concepts were unclear and how we could make them clearer (using analogies and such). While we were doing this, volunteers took quite a bit of time to set up VNC in a client/server mode. Having a VNC session was Pavi's idea, which he would put to use later in the day.

VNC stands for Virtual Network Computing. While it may sound similar to virtualization software like VirtualBox and/or VMware, in reality they are different.

VirtualBox and/or VMware emulate another OS/version inside a host OS (say, having MS-Windows run on Debian or vice versa, and all kinds of permutations and combinations), with a thin layer separating the guest OS from the host OS while making sure that networking, memory and the processor are shared.

Given a proper machine you could also emulate/simulate different types of architectures and such. It's pretty useful for people who want to try things out in a sort of controlled environment, but that's a story for another day.

The VNC concept turns things a bit on their head. VNC is in many ways similar to the well-known 'desktop sharing' concept. Desktop sharing simply means you give a person access either to watch what you are doing, or to take control of your computer/desktop to do some trouble-shooting. This can be used for remote trouble-shooting, provided both parties have the bandwidth and the issue is not with the network link itself.

VNC extends the concept quite a bit. The server serves up its own desktop, plus all keyboard and mouse input, to the clients. In a classroom environment that means a teacher/helper/guide showing his/her desktop to all the students, so people can see what (s)he is doing at that time. VNC (server-side) needs a lot of RAM to work well. In our case we used Abdul's laptop, which had an Intel Core i5 processor and 4 GB of memory, and there was still quite a bit of lag. I was not there, so I don't really know whether he killed some of the background processes and reniced the VNC server or not. Probably things were at defaults.

On the client side you need a viewer and an IP address to connect to. There was quite a bit of network lag even though these were all Ethernet links (I didn't really see/investigate why), and from what little I know Ethernet links should be more than good enough.
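
For reference, a minimal sketch of this kind of classroom setup, assuming Debian's tightvncserver/xtightvncviewer packages and a made-up server address (I don't know exactly which VNC packages the volunteers used):

teacher$ vncserver :1 -geometry 1024x768 -depth 16   # start serving virtual display :1
student$ vncviewer 10.10.0.5:1                       # each client connects to the teacher's IP, display 1

(Strictly, vncserver starts a fresh virtual desktop; to share the live desktop one would usually reach for x11vnc instead.)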

Anyways, as told before, the actual use of VNC came way later. When things were set up, the first thing we asked the students to do was explore the interface and the Debian menu. After that was done, we asked them to change the sources from stable to unstable and do a dist-upgrade. We showed them the terminal, and it took them some time to figure out how to use gedit to edit /etc/apt/sources.list. Some people had mangled, or did not know, what the root and user passwords were, and we had to fix those. After fixing that and getting superuser access, we updated the index and asked them to dist-upgrade.
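
Boiled down, the steps we walked them through look something like this (a sketch only; the actual mirror address was on the local network and is elided here):

$ su -                          # become root (this is where the mangled passwords bit us)
# gedit /etc/apt/sources.list   # change the 'stable' entries to 'unstable'
# apt-get update                # refresh the package index from the (local) mirror
# apt-get dist-upgrade          # pull the installed system up to unstable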

We made a couple of mistakes here. We should have actually shown how the index gets generated and how its time-stamps change; we didn't do that (at least for the index).

Also, if I were doing a dist-upgrade on my own system I would go to a virtual terminal and do the dist-upgrade from there rather than from gnome-terminal, since the upgrade can restart or take down the very desktop session the terminal is running in; we did it there in gnome-terminal. One of the good things we did, though, was to take 2-3 applications, note their version numbers before the switch to sid/unstable, and write them on the whiteboard/blackboard.
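
Recording those before/after versions is a one-liner; a small sketch (the package and output are just an example, matching the versions we saw):

$ dpkg-query -W -f='${Package} ${Version}\n' gnome-terminal
gnome-terminal 2.30.2-1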

As we had a local repository of unstable (32/64 bit), we had also pointed out the local repository address (something in the 10.10.x.x range), so getting the updates was not a big deal (the only bottlenecks being the local Ethernet links and how fast or slow your hard drive can read and write). Still, the whole process took around an hour and a little more. In the meantime, we had declared lunch.

So after lunch we came back, we checked the version numbers and, lo and behold, as if by magic the users saw the versions change: for e.g. gnome-terminal making the jump from 2.30.2-1 to 3.0, and a couple of software versions like that. Some packages were broken, perhaps because the upgrade was done inside gnome-terminal, but we did not want to take the time to investigate the issue.

After this session, there was another revelatory session by Miheer, where he shared about gcc and gdb. He went into some detail as to why we need a compiler and the difference between higher-level languages (like C) and the lower-level assembly language which the machine can understand. He also tried to show how the hardware functions, the buses, how a bunch of instructions move onto the stack, and a bit about registers and stuff.

A couple of things I thought he could improve upon: deconstructing the material a bit more. Apart from the couple of electronics students, most of the people were blank. I felt he should have delved a bit more into processor registers rather than just running away with it. He also did share about modern processor caches, after a bit of prodding from me.

The other thing he kind of did not look into is the standardization of C (C89 and C99). The standardization process of any software engineering product/process is fraught with complications. With so many vendors (an incomplete list can be found on the Wikipedia page 'List of compilers') along with a number of processor architectures, it becomes a game/dance of negotiating feature-sets, multiple competing implementations of some feature-set, and accommodating various interests. It can become extremely bureaucratic, as with the failed Java standards, or extremely chaotic, as with HTML5 standardization (which will take at least a couple more years to finally reach Recommendation status).

Anyways, by this time I had started becoming a bit tired, and hence started to zone out and went to have a cup of chai or something. In the interim Miheer introduced and shared a bit about gdb. I had been wanting to know more about gdb, as my knowledge of it is very rudimentary and debugging is very much part of the software development process.
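
I wasn't around for it, but the basic compile-and-debug round trip he would have shown goes something like this (hello.c being any small test program of your own):

$ gcc -Wall -g -o hello hello.c   # -g keeps the debug symbols gdb needs
$ gdb ./hello
(gdb) break main                  # stop at the start of main()
(gdb) run
(gdb) next                        # execute one source line at a time
(gdb) print i                     # inspect a variable (the name is whatever your program uses)
(gdb) quit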

Anyways, after that session the onus was on us again, and this time we decided to take up shell commands (as it was a popular request). We ran through 10-15 commands, explaining each command, what it does, and showing how to use manpages (all of which we had to discover in our own time). If one wants to be a bit indulgent, one can, for e.g., read quite a few of the GNU utilities' info pages and see RMS's hand in quite a few of them. I was tempted to take that up but decided to pass, as it would have taken the discussion to another level and frankly I did not want to lead it there at that point in time.

So Pavi, Sana and I did some 15-20 basic commands, from introducing the ^ character to ls, pwd, cat, grep, the pipe (|), nano, etc. After we tired of that a bit, we asked some of the volunteers to come up with a command they knew which we hadn't covered. For me there were two commands which I hadn't personally known.
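
For the record, a small sample of the kind of commands we walked through (outputs omitted; "man <command>" was the refrain for all of them):

$ pwd                       # where am I in the filesystem?
$ ls -l                     # list files, long format
$ cat /etc/debian_version   # dump a file to stdout
$ ls /usr/bin | grep vnc    # the pipe: one command's output feeds the next
$ man grep                  # and the manpage for any of the above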

Pavi shared how one could use cat to add text to a file. While I had just been using cat to show data/text on stdout, it was good to know one could use the same command to add info/text to a file. This is damn convenient if you want to make a one or two-line text file in a jiffy. For e.g. :-


$ cat > catexample.txt
this is how you add text,info to a file using cat
^D

and then, after ending input with ^D (Ctrl-D, i.e. end-of-file), outputting it :-

$ cat catexample.txt
this is how you add text,info to a file using cat

The other command which I had no idea about was expr. While I did eventually read the man page and can sense where it would be used, some example shell scripts would be really nice, to see what can be done with it.
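
For instance, a few one-liners of the kind I was hoping for (the numbers and strings are arbitrary):

$ expr 5 + 3                       # integer arithmetic
8
$ expr length "minidebconf"        # string length (GNU expr)
11
$ i=1; i=$(expr $i + 1); echo $i   # the classic loop-counter idiom in shell scripts
2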

A side note :- do people think that the authors of manpages and info pages are sadists and the general users (us) masochists? There is seriously a need to overhaul the language used in quite a few of the manpages. I have seen so many man and info pages which either have no example usage or assume the reader is a computer science geek; it makes the mind whirl.

During the closure of this session, I also tried to share with the students about Internet RFCs (Requests for Comments), as that's a pretty good way to figure out how things work in the Internet culture. While I did delve a bit into the history of the Internet during the netmask bit, I sadly omitted quite a bit of the work done by the IETF using consensus and the sharing of info through the Request for Comments process.

Just before ending the session, we also had a brief Q&A where the students/people could ask us questions and we could answer them. There was an interesting question about multi-processor/parallel programming which I again just papered over.

The real deal would have been to explain using an application like BOINC.

Disclaimer :- I have been a moderate participant in the BOINC project and contribute what little computing I can to some of the sub-projects in the World Community Grid (WCG) project.

Anyways, the BOINC project is a huge distributed network computing project which comprises the BOINC server and the boinc-client and boinc-manager packages.

Now, the BOINC server is needed either if you are planning to put up a project which requires massive computing power (climate prediction, protein folding, earthquake prediction etc., just to give a few examples), or when you have some sort of RAID infrastructure and want to contribute by being a mirror to one of the sub-projects so all the crunched data is there. That would mostly be done in server farms, a data-center or an ISP's backyard, so we will not discuss it.

The real deal here is the boinc-client, boinc-manager and the cc_config.xml file. Without going into implementation details: in most projects you get a chunk of raw data, cut into manageable pieces (work units), which the boinc-client processes sequentially or in parallel depending upon the number of processors, the amount of memory and the existing load on the machine. Simplistically, one could just say that each piece starts its own thread. Something like :-

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14948 boinc 39 19 216m 118m 644 S 62 5.9 125:38.98 wcg_faah_autodock faah24030_ZINC04998709_xFIV6s98S_DRV_00.dpf

Now the above is the output from top for the single process I am interested in.

I have a dual-core processor and use one core for distributed computing and the rest for all my other stuff. While I just run a single thread, if I had a quad-core or eight-core machine I could run multiple threads simultaneously. Now the problems would be making sure that once an application takes an address space, another one does not violate/trample it, as well as prioritizing the work. That would have led to talking about the scheduler, and while I have a sort of broad idea about the scheduler, it's not really anything to write home about. It would have been better if a kernel hacker had been around.
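
The one-core restriction I mentioned goes through the cc_config.xml file named above; a minimal sketch (on Debian the file lives under /var/lib/boinc-client/, if memory serves, and <ncpus> is the only option shown here):

<cc_config>
  <options>
    <ncpus>1</ncpus>   <!-- let the client use only one of my two cores -->
  </options>
</cc_config>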

After this session we had the Closing Ceremony, where we made sure everybody spoke about their experiences and what we could do to make it better next time. Right from the audience to the volunteers and speakers, all had a chance to speak and share. One of the bright spots was getting to know that Muneeb's work on the hyphenation package had been accepted into the Debian repository.

After that ended, some people still had loose ends to tie up. For e.g., Pavi had wanted to share about QT studio (this is where the VNC viewer came in), but as I had already been zoning out for some time and was just too tired, I can't really comment on it. Praveen and Shravan were also closing up some of the packaging sessions they had been taking.

So that marks the end of the 3-day Debian journey. I might put up another post sharing some of the shortcomings and the wins, so that the Mangalore MiniDebconf can get it much better still.

One can find photos of the third day of the minidebconf and the closing ceremony here and here.
