
GNU/Linux primer 2

Hi all,

This is a follow-up to the original primer, based on the various questions individuals have asked through e-mail, through the blog's comment system, or otherwise. The rest of the post attempts to answer them.

Q. A GNU/Linux system is confusing

A. While "confusing" can have many meanings, I am assuming one is inundated by the huge number of choices one faces. In that context:

A GNU/Linux system is designed to be modular in nature, giving as many choices and forks as possible. While making those choices may be overwhelming, I would argue that those choices need to be there for one of the simplest reasons of all.

Many of these components/applications are built by small yet highly talented teams, each delivering its own solution. Now, which teams will be able to sustain their commitment, stay current and remain competitive is not known. This is the same for proprietary as well as free software.

For example, right now there is a huge transition happening on the KDE front: KDE is making an effort to give users and developers a much nicer interface, but it is in flux, which means it is not yet stable.

GNOME, on the other hand, has been travelling a non-breakage path for some time.

Please be aware that both of these are well funded and run on quite a few free operating systems.

Q. Why is a GNU/Linux implementation so costly?

Q. Why is it not able to cross 0.5% of the desktop market share?

Q. They charge for everything, including training

Q. Finding GNU/Linux resources is difficult and costly.

This is all relative to the Windows experience.

A. Well, there are numerous questions to handle in one go, and while I do not have all the answers, I will try to address them one by one.

Here's one partial answer to the questions given above.

Osric Fernandes writes:

if software only has a one-time development cost, then why should we charge for each and every copy? If you want to have a business model around FOSS as a developer, you can charge a one-time but high fee for developing customized software. You can also keep your development cost low by reusing components from other FOSS projects rather than developing from scratch. You can also form a business model around supporting / maintaining software.

While this is as good an answer as any, it doesn't address the whole question.

a. The first point I would really like to attack is the argument that GNU/Linux hasn't moved past 0.5%; according to Gartner it has risen to 4%. The other thing not taken into account is that almost all GNU/Linux desktop distros have an opt-in policy, which basically asks during installation whether you want to push your stats to the server or not. In fact, you will see many of the distributions being very open about what they do.

b. One of the simplest reasons for this is that the distributions want users to be comfortable. If users feel like it, they can send everything: which version, what hardware, which packages, and all those sorts of information. While much of this information may be benign, people may or may not want to share it.
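To make the opt-in idea concrete, here is a minimal Python sketch of how such a check might work. The flag file, the submission URL and the exact fields are all hypothetical; no real distribution uses this code, and real implementations (such as Debian's popularity-contest or Fedora's smolt) differ in the details.

    # Hypothetical sketch: send stats only if the user explicitly opted in.
    import json
    import platform

    OPT_IN_FLAG = "/etc/stats-opt-in"                      # hypothetical flag file
    STATS_URL = "https://stats.example-distro.org/submit"  # hypothetical endpoint

    def user_opted_in():
        # The installer would create this file only if the user said "yes".
        try:
            with open(OPT_IN_FLAG) as f:
                return f.read().strip() == "yes"
        except FileNotFoundError:
            return False

    def collect_stats():
        # The sort of benign data a distribution might ask permission to send.
        return {
            "kernel": platform.release(),  # e.g. '2.6.24'
            "arch": platform.machine(),    # e.g. 'x86_64'
        }

    if user_opted_in():
        print("Would send", json.dumps(collect_stats()), "to", STATS_URL)
    else:
        print("Not opted in; nothing is sent.")

The point of the sketch is simply that nothing leaves the machine unless the user has explicitly said yes.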

c. On a Windows machine, whether one likes it or not, and no matter whether one is an authorized or a non-authorized user, the license key makes it possible to track how many desktops there are, and all sorts of information is sent involuntarily the first time one connects to the Internet.

(Please be aware that I am talking explicitly about distributions like Fedora, openSUSE, Mandriva, Ubuntu and the like, not the Enterprise ones.)

d. One other reason is that the Windows ecosystem is a very IP-friendly ecosystem, where one is able to have NDAs (Non-Disclosure Agreements) and everybody is happy.

What almost all users do not know is that the price of the product also includes the cost of the drivers, and that the drivers too have a license and limitations therein.

It's not as if GNU/Linux does not have binary blobs; there are some, but they are generally frowned upon. Most drivers are reverse-engineered, although the tide is changing, as seen with the recent Atheros code submission as well as the Radeon code drop, although lots remain to be done.
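For the curious, one can actually ask the kernel whether a binary blob has been loaded: loading a proprietary module sets bit 0 of the kernel's taint mask, which is exposed at /proc/sys/kernel/tainted. A small Python sketch:

    # Check whether the running kernel has loaded a proprietary (binary-blob)
    # module. Bit 0 of the taint mask is set by non-free licensed modules.
    TAINT_PROPRIETARY_MODULE = 1  # bit 0 of /proc/sys/kernel/tainted

    with open("/proc/sys/kernel/tainted") as f:
        taint = int(f.read().strip())

    if taint & TAINT_PROPRIETARY_MODULE:
        print("A proprietary (binary-blob) module has been loaded.")
    else:
        print("No proprietary module detected via the taint flag.")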

e. Another reason is that GNU/Linux development is pretty rapid. You have a new kernel every few months or so, while in Windows there is one big update every 3-4 years.
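As a rough sanity check of that release pace, here is a quick back-of-the-envelope calculation using two kernel release dates from around this time. The dates are approximate and quoted from memory; kernel.org has the authoritative ones.

    from datetime import date

    # Approximate release dates (from memory; see kernel.org for exact dates).
    linux_2_6_24 = date(2008, 1, 24)
    linux_2_6_25 = date(2008, 4, 17)

    days = (linux_2_6_25 - linux_2_6_24).days
    print(days, "days, i.e. roughly", round(days / 30.0, 1), "months")
    # Prints about 84 days, close to the ~3 months noted in the updates below.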

f. Another reason is that GNU/Linux supports many more architectures than MS-Windows does.
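The Linux kernel runs on dozens of architectures (x86, ARM, PowerPC, MIPS, SPARC and more). Checking which one your own system runs is a one-liner; a tiny Python illustration:

    import platform

    print("Kernel: ", platform.system())    # e.g. 'Linux'
    print("Machine:", platform.machine())   # e.g. 'x86_64', 'armv7l', 'ppc64'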

g. Another thing is support lifetimes: commercial support for a specific MS-Windows version/release is typically between 5-7 years, while in GNU/Linux it can go anywhere from 7-9 years on the Enterprise stack. On the GNU/Linux desktop it is anywhere between 1.5 and 3 years, a separate desktop cycle that MS-Windows doesn't really have.

h. The rapid pace of development actually means that teaching material has to be constantly updated, whether it's programming, networking or whatever, whereas Windows makes dramatic changes once in 4 years and then uses the next 4 years to get people to understand what has been done.

i. Now, in the public domain, if we take the Indian context, we have NCERT, which takes a view on syllabi and such. How do we expect to have books that would be deprecated in a year or slightly more, when technology is moving so fast, while the Windows platform is more or less stagnant?

j. If you look at the classifieds in newspapers, you will find the cost of training perhaps 10-15% higher and the options a little limited, but that is supply-and-demand economics, which will turn with time.

k. The QA and testing done by people in the free software community is pretty small. It is normally done by people who love a certain distribution quite a bit, or by people paid by their employers to check that the latest version of a certain distro does not break compatibility with their device, although the latter is rare. This may push up the cost a bit during implementation.

Updates :-

a. An erratum from Red Hat shows that the Enterprise versions are supported for 7 years or more as well.

b. Kernel releases happen roughly every 3 months.

That is all from my side for the moment.



4 thoughts on “GNU/Linux primer 2”

  1. @arky thanks

    @senthil Thank you for pointing out that stuff. It seems that while trying to make sure I don't overshoot or become boring, I missed some very important points. How does it sound if I take those questions as the basis for a GNU/Linux primer #3?

    If you have any more questions, please feel free to throw them in as well. I am looking forward to any more questions you guys may have.

  2. Another question asked: is GNU/Linux secure, if everything is open?

  3. Linux and the desktop market: what about control of the market, what about the lack of pre-installed hardware, and what about governments not choosing what is cheaper and more meaningful for the community as reasons?

  4. Good work! Attaboy
