Tuesday, December 30, 2008

Apple Safari 4.0 Developer Preview Has Problems With The Acid2 Test

It seems that Apple Safari 4.0 Developer Preview (526.12.2) has trouble with the Acid2 test.

In a restored-down window, it passes the test.

However, if the window is maximized, Safari corrupts the correctly rendered image.

Safari also renders the Acid2 test incorrectly if the window is made narrow.

The extent of the corruption increases as the width is decreased further.

Finally, Apple's engineers have still not fixed the annoying white-line bug in Safari - it looks crude and unprofessional.

However, despite all this, it's reassuring to see that this build of Safari shines in the Acid3 test.



PREDICTION: Google Android Operating System Will Steal Market Share From Microsoft Windows

Google's Android operating system, the way I see it, holds more potential than Apple's Mac OS in its ability to supplant Microsoft Windows as the mainstream OS.

Although Android has started from smartphones, I believe that it will soon start shipping on netbooks, then nettops, and finally on mainstream desktops and laptops/notebooks.

Multiple factors will be responsible for this:-
  1. The pressure to reduce prices of devices (Android is free and open source) will attract hardware makers (smartphones, netbooks, nettops, desktops, laptops, etc.) to Android. Also, the alleged discomfort that hardware makers have with Microsoft's licensing terms will be another factor pulling device makers to Android. Multiple device makers will flock to Android, and the market will see a huge surge in the number of devices of various shapes, sizes and prices - all powered by Android
  2. The capabilities and quality of Android as an operating system: Android, the way I see it, is a capable and complete operating system, and not a mobile-optimized and stripped-down version of a desktop operating system. There is nothing that stops the use of Android on mainstream desktops and laptops. And being open source software, bugs will be discovered and removed quickly - like it has been with Firefox and Linux
  3. The discomfort with and hatred for Microsoft and Windows: I believe that one of the chief reasons for the success of Firefox has been the devotion with which fans of Mozilla and Firefox have made efforts to develop, evolve, improve, promote and use it. The hatred for Microsoft, combined with the love for Mozilla, caused millions of Microsoft users to shift to Firefox, and these converts also converted many of their friends and family members to Firefox. This same set of users will be among the first ones to install and try out Android, should it be available for the desktop. This set of users - probably millions in number - will be more than happy to use a Linux-kernel based operating system over Windows. This same set of users will develop, evolve, improve and ultimately spread Android to others
  4. Android is backed by Google. And many more heavyweights: Unlike Firefox, which is backed by the not-so-rich Mozilla, Android has the formidable backing of Google. Google has the financial and industry position to invest heavily in development, partnerships, and massive promotions, and unlike Microsoft, Google isn't hated by the community at large, at least apparently. It seems that developers are not only comfortable with Google, they are actually happy with it. Doubtlessly, most users are, as well. Add to this the support of the other members of OHA, and it's easy to see that the young Android already has lots of support
  5. Android-powered devices stay close to our hearts: With the iPod, Apple brought our whole collection of music to our pocket. We no longer needed to have the collection on our Windows-powered machine, thus reducing the need/role of Windows in one way. Android-powered smartphones will reduce the need/role of Windows-powered machines in multiple ways - we will be carrying our entire music, photo and video collection right inside our Android-powered smartphone. A large display will make Web browsing and online video more entertaining/productive/useful, and will also make watching locally stored photos and videos more fun. There will be less and less need to boot the Windows machine. Click a photo using the phone, and post it on Orkut or Facebook from within the phone - no need to transfer it to the laptop
  6. Location-specific information and social-networking: Wireless network, Wi-Fi and GPS, all allow a user's location to be discovered, and based on that tailored search results (among other things) can be provided to the user - not usually available on a desktop. Also, location enables creative and engaging new ways of social networking. Local and social will thus be yet more reasons why Android-powered devices will be more useful to the user than his Windows-powered desktop/laptop
  7. Application and data portability resulting from Cloud/SaaS applications: Google will push its Cloud-based services such as Gmail, Google Calendar, Google Docs, Picasa Web Albums, Google Maps, Blogger, etc., over native desktop applications in Android (user data resides on the servers, and is accessible from any other computer or smartphone). So while users temporarily switching from Android to Windows will be able to access their applications (Gmail, Blogger, etc.) and hence data on their Windows-powered machines, the reverse will frequently not be true (Microsoft is notorious for making the experience of its products poor on rival platforms - it's hard to imagine that an Outlook user will have seamless access to his Outlook data on an Android device)
  8. The sheer weight of Windows: While Android is fast even on a mobile device, Windows is often complained to be slow even on desktop machines. This sheer weight associated with Windows makes it less practical to run it on mobile devices without heavy modifications. Because Google has started from a mobile device, it's easy for Google to upscale Android for a desktop machine. Microsoft, on the other hand, will have a tough time stripping Windows down to enable it to run at a decent speed on mobile devices. The dramatic difference between Windows Mobile and Windows Vista/7 creates an inconsistent user experience, a situation Android should not suffer
  9. The halo effect: An interesting fact is that while on the desktop, users resist using an unfamiliar operating system, on mobile devices they happily use whatever proprietary system is given to them (Symbian, BlackBerry and the various flavors of proprietary OSes in low- or mid-end Nokia, Samsung and Sony Ericsson phones). This fact will ensure that users will adopt Android OS on their phones as easily as they adopt other mobile OSes. Over time, these users will get familiar with the OS (its applications, UI, etc.), and when presented with a desktop computer powered by Android OS, these users will have no difficulty in using it or being productive on it. In fact, users who like Android may actually ask for it - Dell, HP, Toshiba, Acer, Sony and Lenovo should not make the grave mistake of ignoring Android as merely a smartphone OS
  10. Android is cool to use and cool to flaunt. And Google is cool too: One of the key reasons why people buy a Mac, iPod or iPhone is the "cool to use and makes me look cool among my friends" factor. This image associated with Apple and Google is a far cry from the boring and corporate image associated with Microsoft. I believe that the coolness of Android (applications/features and UI), the coolness of owning a Google-powered device, and the coolness of telling friends that my phone runs Android, will be a not-insignificant factor in the adoption of Android
  11. The image of Windows is tarnished: If the unfriendly image of Microsoft was not enough, the image of Windows is apparently quite poor. Windows is considered an operating system that is slow, and is often plagued by viruses and other malware. Macs, on the other hand, are perceived to be fast, secure and free from malware. If Google succeeds in keeping Android free from malware and privacy/security issues, people who are fed up with Windows will want to move to Android (just like these people want to move to Macs)
  12. Android can be customized and even forked: Many device makers and wireless carriers like to customize the core OS according to their specific desires or needs. The open source nature of Android allows device makers to customize the OS. Device makers who do not get this flexibility with Windows Mobile or Symbian may choose Android instead. What's more, Android can even be forked to create an entirely customized operating system
  13. Android will improve quickly and dramatically: We should not forget that Android is very new. And whatever shortcomings it does have, they will probably be eliminated soon. It is easy to see that 2 years down the road, Android will be brimming with immensely useful applications and features

Monday, December 22, 2008

An Idea For A "Variable Power" Car Engine That Saves Fuel

Along the lines of my post - An Idea For A Shutterless Digital Camera - here I describe an idea for liquid-fuel-powered car engines that are "variable", in the sense that their power output can be decreased by turning off some cylinders, to save fuel.


In a 1.3 liter, 4-cylinder petrol car like my Maruti Suzuki Swift, is it possible to "turn off" 1 or more cylinders (temporarily), so that although the power output reduces, fuel consumption decreases too?

I got this thought while driving from Noida towards Pitampura (Delhi). Concerned about fuel consumption, I thought "My car has a 4-cylinder 1,300 cc engine. So each cylinder's capacity is 325 cc. And each cylinder accounts for exactly 25% of the total fuel consumption. Right now I'm driving alone, and the air-conditioner is off. I really don't need all the power generated by the engine. It would be great if I could just turn off a cylinder, so that my engine temporarily reduces to a 975 cc engine, thus consuming 75% of the fuel - saving me a significant 25%"

I'm not sure whether it's possible to achieve this with the current design of engines, but I'm sure it's possible to build an engine which has this ability. An internal computer could then ensure that different cylinders are turned off each time, so that wear-and-tear is evenly distributed across all cylinders.

I'm also sure that there will be many effects of turning off a cylinder - like the other cylinders having to run at a higher RPM to produce the same amount of power, etc. But all that is the work of an engine engineer!
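The arithmetic above, plus the wear-levelling idea, can be sketched in a few lines. This is a toy model only: it assumes fuel burn scales linearly with active displacement, which a real engine engineer would surely complicate; all names and numbers are illustrative.

```python
# Toy model of the "variable power" engine idea: deactivate cylinders
# to trade power for fuel economy. Assumes (simplistically) that fuel
# consumption is proportional to the number of active cylinders.

from itertools import cycle

TOTAL_CC = 1300
CYLINDERS = 4
CC_PER_CYLINDER = TOTAL_CC / CYLINDERS  # 325 cc each

def effective_displacement(active_cylinders: int) -> float:
    """Displacement when only `active_cylinders` cylinders are firing."""
    return CC_PER_CYLINDER * active_cylinders

def fuel_fraction(active_cylinders: int) -> float:
    """Fraction of full-engine fuel burn, under the linear assumption."""
    return active_cylinders / CYLINDERS

# With one cylinder off: a 975 cc engine burning ~75% of the fuel.
assert effective_displacement(3) == 975
assert fuel_fraction(3) == 0.75

# An internal computer could rotate which cylinder is deactivated,
# spreading wear-and-tear evenly across all four.
rotation = cycle(range(CYLINDERS))

def next_deactivated() -> int:
    return next(rotation)
```

Repeated calls to `next_deactivated()` cycle through cylinders 0, 1, 2, 3 and back to 0, so no single cylinder is idled more often than the others.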


Saturday, December 20, 2008

An Idea For A Shutterless Digital Camera

I remember, when I was in class 9 or 10, I read somewhere that children are the freest thinkers (and innovators), because they don't have any knowledge of financial or physical constraints, etc., and so their thinking wanders freely. In contrast, the thinking of experts, professionals and researchers is (at least sometimes, and at most always) handcuffed by their knowledge of "costs", "feasibility", "practicality", etc.

It's more likely that the solution produced by an expert or a professional / researcher will work, but it's also likely that this class of people will not be able to come up with "fresh", "out-of-the-box" and "revolutionary" ideas (due to the reason mentioned already).

With this context in mind, I will write an idea - a childish one - that has the potential to make shutterless digital cameras possible.

Why a shutterless digital camera? I like digital devices with the least number of moving parts. I like it when functionality is delivered without having any moving part (so I love flash memory based SSDs over magnetic HDDs). I feel that gadgetry with moving parts is unreliable (especially hard disks). And a digital camera with no moving parts would certainly be great - long-life, shock-proof, and longer battery life.

The idea - can varying the time for which we "pick" the signal from a digital camera's sensor be used to emulate the effect of shutter speed? More specifically, the idea is to turn on the sensor of the camera for such an amount of time, so that the amount of signal collected from the sensor is equal to the amount of signal we get by keeping the shutter open for a specified amount of time.

In a nutshell, the idea is that - the sensor is always exposed to light. However, it's to be turned on only when a picture is to be taken. And it's to be turned on for a small amount of time, and a continuous signal is to be collected for that time. The continuous signal could be broken down into discrete signals (say every 1/5000 second), so that RGB values represented by each discrete signal are added to progressively build the image. The longer the sensor is turned on, the more exposed the image would be (as each discrete signal would be added).


2 things to note:-
  1. The mapping between conventional shutter speeds and the time for which we capture signal using this idea may not be (will most likely not be) direct. For instance, it's possible that to get the kind of photo we get using 1/60 second as shutter speed, it's required to sample the signal for only 1/25 second
  2. It's possible that this entire idea is fundamentally flawed (i.e. it's based on certain assumptions made subconsciously - i.e. without explicit knowledge - which are incorrect). But in that case, it's possible that a new kind of sensor (possibly using a new kind of technology - for example using photoelectric effect) can be constructed for which this idea works
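The accumulation idea described above can be sketched as a simulation. This is a minimal sketch under stated assumptions: `read_sensor_slice` is a hypothetical stand-in for real sensor hardware, and light collected per slice is taken to be simply radiance times slice time (no noise, no saturation).

```python
# Sketch of the "shutterless" exposure idea: the always-exposed sensor
# is read in short discrete slices (here 1/5000 s each, as in the post),
# and the slices are summed to progressively build the image. Longer
# accumulation = more slices added = a more exposed image.

SLICE = 1 / 5000  # seconds per discrete sample

def read_sensor_slice(scene, slice_time=SLICE):
    # Stand-in for the hardware: per-pixel light collected in one slice
    # is modelled as scene radiance multiplied by the slice duration.
    return [pixel * slice_time for pixel in scene]

def expose(scene, exposure_time):
    """Accumulate discrete slices until `exposure_time` has elapsed."""
    n_slices = round(exposure_time / SLICE)
    image = [0.0] * len(scene)
    for _ in range(n_slices):
        slice_signal = read_sensor_slice(scene)
        image = [acc + s for acc, s in zip(image, slice_signal)]
    return image

scene = [100.0, 200.0, 400.0]   # arbitrary per-pixel radiance values
short = expose(scene, 1 / 250)  # 20 slices
longer = expose(scene, 1 / 60)  # ~83 slices: visibly "more exposed"
```

Note that this also illustrates point 1 above: the effective exposure depends on how the accumulated signal is scaled, so the mapping to conventional shutter speeds need not be direct.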
Update (22-12-08): I read about the concept of Design for Manufacture (DFM) while traveling from Delhi to Ludhiana. The concept essentially restrains a designer's thinking to make sure that what he designs is manufacturable. These restraints are the kind of things that prevent free thinking, and children, unaware of these restraints, innovate and think freely! Read about DFM here, here, here and here.


My First Award - At My First Job & On My First Project :)

Yesterday - December 19, 2008 - was a special day for me. We had finished our project just a week back (we had been working on it since August 2008, i.e. it took ~4.5 months), and informal feedback from the client came on December 18, 2008 (formal feedback is still to come).

The feedback was not just positive, it was mind-blowing! The client was very pleased with our work, and especially applauded the practices and processes used by us, as well as the quality of our deliverable. And my company's top brass was extremely happy as well.

So, each of the 5 team members was awarded an Outstanding Client Service Award in the company-wide meeting yesterday (the core team consisted of 5 people, and 2 more people handled higher-level work). It felt quite nice to receive the award. All the hard work that the team had put in paid off nicely.

At the personal level, I feel that all the diligence and hard work I put in has paid off for me. Those late nights when I would be busy consuming, digesting and marking reports from Forrester, Gartner and IDC (and sometimes Burton, Jupiter, Ovum and Yankee too); those train journeys from and to home, when I would be studying research papers and white papers - all that has paid off in the form of a satisfied client, a happy team, elated company officials, and personal satisfaction (of course, this is the result of a combined team effort).

However, after this initial success, I feel more responsibility now. This success is past now, and looking ahead, the bar of expectations is higher. The next project is visibly tougher (it's the same client - pleased with our work, they've given us a significantly more challenging project this time). I hope I am able to deliver even better this time.

Friday, December 19, 2008

Secretly Devilish - Could Google Be Promoting Its Properties By Playing With Search Ranking?

Consider 2 webpages, both deployed on the same website (say http://www.rishabhsingla.com/ ), having similar file names (index1.html and index2.html).

The content of index1.html is as follows

The Web of today gives us for free, much of the content that people used to pay for sometime back. While on one hand we have online videos to watch, we can read regularly updated news, and even have encyclopedic content to consume - all this for free! Even the portals providing a regularly updated view of this content are free.

The Web doesn't provide us with just free "content". It also gives us free services and tools. Users can engage into social networking, send and receive email, enjoy chatting with their buddies - by text or even by voice or video, search for images or photos, look for interesting blog posts, write blogs of their own, and can even use office productivity applications - once again, all this for free!

If all these free goodies were not enough, even the applications used to access the Web are available for free. We have secure and capable Web browsers, and feature-rich toolbars which add useful functionality to these browsers.

Clearly, the Web saves people a lot of money!

The content of index2.html is as follows (not everything is the same)

The Web of today gives us for free, much of the content that people used to pay for sometime back. While on one hand we have online videos to watch, we can read regularly updated news, and even have encyclopedic content to consume - all this for free! Even the portals providing a regularly updated view of this content are free.

The Web doesn't provide us with just free "content". It also gives us free services and tools. Users can engage into social networking, send and receive email, enjoy chatting with their buddies - by text or even by voice or video, search for images or photos, look for interesting blog posts, write blogs of their own, and can even use office productivity applications - once again, all this for free!

If all these free goodies were not enough, even the applications used to access the Web are available for free. We have secure and capable Web browsers, and feature-rich toolbars which add useful functionality to these browsers.

Clearly, the Web saves people a lot of money!

You would've noticed by now that everything is the same for these 2 webpages, apart from the properties to which they point. index1.html points to various Google properties, while index2.html points to non-Google properties.

Google's PageRank, as far as publicly known information about it goes, doesn't factor in outbound links when calculating the rank of a webpage. But that's just the publicly available information!

The question I ask is - will the rank of index1.html and index2.html be exactly the same (considering everything about them, except their file names and outbound links, is identical)? I have my share of doubts. It's in Google's interest to promote webpages which point to Google properties, so that visitors to those webpages have a higher chance of coming to the Google properties which have been pointed to. And currently there is no way to ensure that Google is not engaging in malpractice. The underlying reason behind my doubt is that Google is both a gateway to the Web (both Google and non-Google properties), as well as a provider of some of what constitutes the Web. It helps Google if it can promote third-party pages pointing to Google properties, without letting anyone notice.
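The experimental setup described above - two otherwise-identical pages differing only in where their links point - could be verified mechanically. Here is a hedged sketch that counts outbound links to Google-owned domains on each page; the domain list and the HTML snippets are illustrative stand-ins for the real index1.html and index2.html.

```python
# Count, for each page, how many outbound links point at Google-owned
# domains versus elsewhere. Uses only the standard library.

from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative subset of Google properties circa 2008.
GOOGLE_DOMAINS = {"google.com", "youtube.com", "blogger.com", "orkut.com"}

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.google = 0
        self.other = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc.lower().removeprefix("www.")
        if host in GOOGLE_DOMAINS:
            self.google += 1
        elif host:
            self.other += 1

def count_links(html: str) -> tuple[int, int]:
    """Return (links to Google properties, links to other sites)."""
    parser = LinkCounter()
    parser.feed(html)
    return parser.google, parser.other

# Hypothetical fragments of the two pages:
index1 = ('<p><a href="http://www.youtube.com/">videos</a> and '
          '<a href="http://www.blogger.com/">blogs</a></p>')
index2 = ('<p><a href="http://www.vimeo.com/">videos</a> and '
          '<a href="http://wordpress.com/">blogs</a></p>')
```

With these fragments, `count_links(index1)` reports two Google-bound links and zero others, and `count_links(index2)` the reverse - confirming the pages differ only in link targets, as the experiment requires.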

This post echoes another concern of mine - Google promoting webpages with AdSense deployed on them over those which either have no contextual advertising system deployed, or have a system from one of Google's rivals deployed. Read about my concern here.

P.S. I composed the short essay used as the content of index1.html and index2.html


Thursday, December 18, 2008

PREDICTION: Google's Native Client Technology Is A Game Changer

No matter what benefits accrue from the use of a Web browser for running Web applications (platform independence, sandboxed execution, etc.), the fact remains that a Web browser is yet another abstraction layer, and adds execution inefficiency. Web code runs one layer higher, wasting precious CPU cycles and consuming more memory. The speed lag may be less pronounced on powerful modern desktop systems (in part because Web applications have so far been put to relatively "light" uses), but imagine running the full version of Gmail or Google Docs on even a relatively powerful mobile device such as the iPhone (inside the Safari browser) at an acceptable speed - it's laughable...

And so, for many months, I had been wondering - why doesn't Adobe, Google, Mozilla or a startup develop a product (either a complete platform by itself or a browser plugin) that allows "sandboxed execution of native code"?

I'm delighted to see Native Client, Google's project that does just that - sandboxed execution of x86 application code inside a Web browser. What's more, the characteristics we typically expect from browser-based applications - browser-neutrality, OS-independence and security - are preserved. Bravo Google! I see this project (and also JavaScript engines such as V8, which compile JS code to native code) as game changers (albeit I believe it will take at least a year before Native Client gets a decent amount of traction, and at least 2-3 years before it starts seeing widespread adoption).

My personal feeling is that this is a nail in the coffin of desktop applications as we've traditionally known them. This makes it possible to securely run the full Photoshop inside a browser. And this is the technology (or a derivative thereof) that will eventually subvert/supersede the myriad of technologies fighting for domination as the choice for applications served from the Web (Web browsers, Adobe Flash, Microsoft Silverlight, Adobe Integrated Runtime, Sun Microsystems JRE/JavaFX, Mozilla Prism, etc.). To be fair, it's not the first time that native code is being run in a browser - that credit deservedly goes to Microsoft, whose ActiveX has long had this ability, albeit infamously insecurely (there is a reasonable probability that ActiveX could get a second life, if Microsoft evolves the technology to make it more secure). By creating a viable application platform layer at the browser level, Google further undermines the role of the operating system as a software platform, loosening the long-held grip of Microsoft and Apple on their respective software ecosystems. Over the long term, NaCl poses a serious threat to these now-popular operating systems.


Quake running inside Firefox - over Native Client (from Ars)

Of course, Native Client is still at an embryonic stage. It will progress in both evolutionary and revolutionary ways, and my belief that it will be a game changer factors in this expected evolution. Native Client has the essential quality any would-be contender for the dominant software platform must possess - cross-platform and cross-browser support (support on mobile platforms such as Android should follow in some time). Looking at it from an inverted perspective, I see no reason why Native Client shouldn't succeed. Finally, I see Native Client getting bundled with Chrome down the road (the way Gears has been).

Chrome (secure architecture + inbuilt search) + Gears (offline) + Native Client (heavy duty functionality + speed) = A fulfilling user experience for various types of applications and content.

I wonder why this project didn't get the massive press coverage I believe it deserves... Does my post on journalism provide some indirect (*cough*) explanation (read: ranting...)?

Sunday, December 14, 2008

Human brain's information-retrieval system is imperfect (apparently)

This post is in continuation to my previous post (Human brain could be storing & retrieving information as 'related blocks').

About a year back, I was at home and my sister and I were watching a program on TV. A character appeared on screen whom I felt I had seen before. I started trying to recall his name, but couldn't. My sister knew the name, and after watching me trying to recollect it for about 5 minutes, she finally spoke out the name, and I exclaimed "Yes! This is his name!".

Immediately, I realized something. To be able to confidently say "Yes, this is the name", I must have compared the name my sister spoke to the name already residing in my memory. After all, it's impossible for me to confirm that the name my sister spoke is his name, unless I already have a full copy of that name in my brain to compare with.

This leads me to two things:-
  1. Although my brain's information storage system had successfully stored the name, the information retrieval system was unable to read it
  2. Our brains have much more information stored inside than we know. The inability of the retrieval system to retrieve all that information doesn't mean that tons of information isn't present
Why didn't the information retrieval system of my brain retrieve the name by itself? Possible reasons:-
  1. Wear and tear over time, leading to partial damage to the information retrieval system
  2. Inherent shortcomings in the system's design
  3. My focus was on some other task, and so the retrieval system wasn't focused on the right block (to better understand this point, read this - Human brain could be storing & retrieving information as 'related blocks')
  4. An excessive amount of information had been stored in the brain, and the retrieval system either found it difficult to retrieve information (pointing to a design flaw in the retrieval system, or its inability to scale), or the retrieval process required more time (there's no flaw in the design of the retrieval system, but the time needed to retrieve information is proportional to the amount of information stored)
Will we be able to solve this problem in future? Two obvious approaches may help:-
  1. Improving hardware and algorithms of brain using genetic engineering
  2. Connecting the brain to external equipment to copy information stored in it onto a computer, and retrieving it from there
The intent of this post was to prove that the information retrieval system of human brain has its share of flaws. It will be helpful to go through this post, to get a better idea of my views on information storage inside the brain.

Sunday, November 16, 2008

Inconsistency Between Results Of Google Search And Suggestions Of Google Suggest

When I type apple developer connection into the Google search box, it returns http://developer.apple.com/ as the top result (and I'm Feeling Lucky button takes one to this URL). However, when I type the same query into the address/location bar of Google Chrome, the inbuilt Google Suggest feature shows me http://developer.apple.com/iphone/ as the suggested URL.

The joy of using Google and Chrome gets marred due to this inconsistency. I've been using Google for years now, and I'm used to typing certain queries, only to expect certain results at fixed positions in the SERP that Google returns. I'm also quite used to the Browse By Name feature in Firefox, and expect that if a particular query typed into the location bar of Firefox takes me to a specific URL (because of the Browse By Name feature), then the same query typed into the location bar of Chrome should return the same URL as a suggestion.

This, after all, makes perfect sense!

The screenshots below show the inconsistency:-
I just noticed that all the visible bookmarks in the Bookmarks Toolbar of my install of Chrome point to Google properties! A sign of things to come?

Saturday, October 11, 2008

The Blog Invasion - A Drop In Journalism Quality At CNET News

It's a little silly that I'm going to point this out in a blog post. I'm feeling increasingly sick of the unusually large number of blog posts coming up on some of my (till recently) favorite and most-read news websites - such as CNET News.

There was a time some years back when I would visit CNET News daily (it was located at http://news.com.com/ back then) and would find a dozen or more fresh, well-written news stories - free from immature and misinformed personal opinions, and both enjoyable and insightful. The advent of blogs on the Web - initially by independent individuals on third-party services such as Blogger, and later on news websites themselves - started the trend of what CNET now calls a News Blog. Initially, these News Blogs took up only a small proportion of the total number of stories published by CNET News. Gradually, however, the proportion of these blogs grew, until a day came when News Blogs finally outnumbered News Stories on the homepage of CNET News. And this, in my opinion, was an unfortunate event, not just for CNET News (and its discerning readers), but for Web-based journalism as a whole (I see a similar trend on some other websites such as Wired).

The final blow came when CNET stopped marking these News Blogs with a large and clearly-visible News.Blog banner, and gave all types of stories a unified http://news.cnet.com/ domain (previously all these News.Blog posts had a separate Internet sub-domain). Together, these 2 changes ensure that not only does one not know, before clicking on a link pointing to CNET News (say, from Google News), that it points to a News Blog and not a news story, but worse, one can't always be sure that one is reading a blog post even after having landed on the page. Additionally, news aggregators such as Google News are mistakenly including these News Blogs among the news stories they aggregate, when the correct place for such posts is the newly revamped Google Blog Search (now in Google News format). One of Google's goals is to organize the world's information and make it universally accessible and useful, and an important step in this direction will be to separate indisputable and reliable facts from disputable and unreliable personal opinions. My reasoning for this is that there is a clear line of distinction between News Stories and Blog Posts, as outlined below:-
  1. A News Story: Should present pure and unbiased facts (as they happened), and only pure and unbiased facts
  2. A Blog Post: Should present pure and unbiased facts (if it presents them at all, something not required of a blog post), but can additionally add personal opinions (which sure can be biased, provided reasonable and sufficient attempt is made to ensure that the reader is made aware that he is reading a blog post and not a news story, as well as implications of the same)
The issue I have with CNET News is that it labels and markets itself as a News Website, whereas with News Blogs generally outnumbering News Stories on its homepage, it should ideally be branded as CNET News Blogs, thus reflecting the disproportionate share of blog posts. Readers at large should not be tricked into believing that they're visiting a News Website when in reality they are being given a heavy dose of personal opinions, instead of facts and logical analysis.

Which brings me to the pathetic, and often hilarious News Blogs that many (most?) journalists write. With apparently no real understanding of the underlying business models or technologies, many journalists are dishing out "analysis", "opinions" and hilariously, even "forecasts" and "predictions" about brands, products and segments in the technology sector. Just look at this story and I bet you'll either laugh holding your tummy or get to the verge of crying. The hopelessly pathetic nature of this unusually immature post can be effortlessly judged from the expectedly large number of reader comments it has accrued (which, by the way, are way more correct and enjoyable to read than the story itself - maybe it's Computerworld's secret futuristic 2025 AD strategy of making readers themselves create great content for Computerworld for free, by Computerworld putting up a post full of the material ejected from south end of a cow, thus triggering a surge of corrections and fresh inputs from infuriated readers).

Not only is the correctness of journalism questionable at these so-called News Blogs (I can't digest this term - how can something be both News and Blog?), the professionalism of the language used is questionable as well. Look at this story on CNET News. Comparing it to the flavor I get from The New York Times and The Wall Street Journal (notice that WSJ keeps a separate sub-domain for blogs at http://blogs.wsj.com/ to clearly separate authentic news stories from blog posts), I realize why NYT and WSJ are NYT and WSJ, and why CNET is CNET and perhaps will remain CNET.

It infuriates me that right now the CNET News homepage is highlighting 15 stories in a large font size, and at least 8 of those are blog posts (most blog posts on CNET News look so identical to news stories that it's hard to decide which is which). Unless a clearly-visible banner is added to each blog post indicating that it is a blog post and not a news story (along with the cautionary implications of that), such masquerading of blogs as news stories by CNET is tantamount to misinformation.

My 2 cents on the degrading quality of journalism in the age of the World Wide Web.

Update (18-12-08): A recent story in The Wall Street Journal claims that Google's recent actions indicate a reversal of its previous stance on Net Neutrality. Once again, a confused, ignorant and misinformed journalist - making premature conclusions and judgments - is to blame. With apparently no fundamental knowledge or understanding of computer science, computer networks, caching, content delivery networks, or edge computing, the WSJ journalist makes hyperbolic claims that reveal his state of confusion, ignorance and misinformation. I completely agree with Google's visibly enraged response blasting this story in The Journal.

Sunday, October 05, 2008

Credibility Of OpenOffice.org 3.0 As An Alternative To Microsoft Office 2003 - My Transition Experiences (And More)

For a month or so I've been trying out the new OpenOffice.org 3.0 Release Candidate 1 (I previously used Microsoft Office - XP and 2003 - for well over 6 years).

Why am I trying out OpenOffice.org?
  1. To build an understanding of its capabilities, ease-of-use, quality & performance
  2. To find out the issues which are inhibiting its mainstream adoption
  3. To compare it to Microsoft Office & list out major positive & negative differences
  4. To find out if it can really be an alternative to Microsoft Office (This feasibility study is for both my personal use and for deciding whether OpenOffice.org is ready for adoption in SMBs/Enterprises)
Results of my month-long tryout:-
  1. OpenOffice.org is very suitable for my personal needs. It can fulfill all my 'creative' Office-Suite needs - by 'creative' I mean that OpenOffice.org is suitable for creating documents. It isn't the perfect solution for importing/opening documents in the Microsoft Office binary formats: the importing is buggy and just plain unsatisfactory, and leads to significant productivity loss in the form of the manual cleanup/editing required to restore the document's original form. However, broader adoption of the Office Open XML and OpenDocument formats, and improvements to OpenOffice.org's import filters for the Office Open XML formats, should considerably alleviate this issue.
  2. OpenOffice.org applications start up considerably more slowly than Microsoft Office applications, and this is a significant issue. The effect of this performance lag can be imagined from the woes Web-browser users faced a few years back when the Netscape and Mozilla browsers would start up painfully slowly. Quick application launch is a mandatory requirement for a good user experience, and OpenOffice.org needs to bridge this performance gap sooner rather than later.
  3. OpenOffice.org applications require considerably more system memory than Microsoft Office applications. Also, the responsiveness of the user interface of OpenOffice.org applications is considerably lower than that of Microsoft Office applications (although in absolute terms it is more than satisfactory). In summary, OpenOffice.org has performance issues that need to be addressed immediately. OpenOffice.org would benefit immensely from stringently following the Google User Experience Design principles.
  4. OpenOffice.org is a feature-rich, high-quality and easy-to-use suite of applications. It is close enough to the ease-of-use of Microsoft Office 2003 applications to be declared fit for consumption by the general public.
  5. Apart from the performance and file-format-compatibility issues, another significant issue inhibiting mainstream adoption of OpenOffice.org is the lack of awareness (among the masses) of its existence, its quality and its suitability as an alternative to Microsoft Office. People just don't know that there exists an office suite out there that is a credible alternative to the expensive Microsoft Office. How many of us who are aware of OpenOffice.org know that it has an extension system akin to Mozilla's Add-ons system? OpenOffice.org needs to learn from the Mozilla Foundation and Mozilla Corporation to solve this issue.
  6. Finally, the masses (and in this case SMBs and Enterprises as well) are unaware that using OpenOffice.org in conjunction with the free (and official) Microsoft Office Viewers can be a largely complete and compromise-free combination for users whose needs revolve primarily around opening/viewing Microsoft Office documents obtained from third parties and first-hand creation of their own documents.
In summary, OpenOffice.org 3.0 is a serious and credible challenger to Microsoft Office 2003. Version 3.0 is well ahead of its relatively half-baked predecessors, and minor annoyances apart, OpenOffice.org 3.0 promises to be the first credible challenger to Microsoft Office. Customers looking to save hundreds or thousands of dollars should embrace OpenOffice.org, if their specific needs are in line with those outlined in this post. Recommended for adoption by individuals, SOHOs and SMBs, since the cost of acquisition is a relatively more pressing factor for them than for cash-rich large enterprises unwilling to make any compromises, whatever the monetary cost.

Saturday, October 04, 2008

Google Is (Probably) Tampering With Its Search Results (Perhaps To Hurt Microsoft) - Intriguing Evidence

When I read about Microsoft's launch of the SearchPerks! program, I felt like reading about it on Wikipedia (I generally visit Wikipedia to read about something). I typed microsoft searchperks wikipedia into Google and got the top result pointing to the SearchPerks! webpage on Wikipedia.

Specifically, I reached this version of the article. Since at that time this version was the most recent one, it was running as the main article. Once again feeling like puking at Microsoft's desperate actions, I decided to edit this article to give readers a real perspective of this program.

So in the subsequent hours, I made multiple edits to the article, which can be seen here, here, here, here, and here (in order from oldest to newest). Note that I still believe that my perspective on this program (the way I presented it on Wikipedia) should be included in the Wikipedia article - it is something that deserves to be told to readers, since it is true.

From time to time I would visit this webpage (always using Google to get me to the Wikipedia page) to check if someone had added to, edited or removed what I had put in the article. And every time, Google would show the Wikipedia article as the top search result, whether the query was searchperks wikipedia, live searchperks wikipedia, windows live searchperk wikipedia or microsoft live searchperks wikipedia.

However, today when I wished to reach this page via Google, Google no longer showed a link to the Wikipedia article.

Following screenshots prove this:-

Also, a search restricted to the English Wikipedia domain returns zero results for the term searchperks:-

Finally, searching for the URL of the SearchPerks! program returns zero results:-

It seems that Google is manually tampering with its search results.

There is another interesting thing to observe. Look at this screenshot:-

It shows the search results page on Google for the query windows live searchperks wikipedia. Note that there are results from Wikipedia in the list of results returned by Google.

However, when I click on More results from en.wikipedia.org link on the webpage, there are zero results, as visible in the screenshot below:-

This is awkward (and illogical), because if Google's algorithms return results from en.wikipedia.org on the main search results page for the query windows live searchperks wikipedia, then why aren't at least those same results returned when the same query is run restricted to the domain en.wikipedia.org?
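The contradiction can be stated as a simple set relation: any results from a given domain that appear on the general results page should also appear when the same query is restricted to that domain. A toy sketch in Python (the URLs and result sets below are hypothetical stand-ins, not actual Google output):

```python
# Illustrative consistency check; hypothetical data, not a real Google API call.

def domain_results(results, domain):
    """Filter a set of result URLs down to those on the given domain."""
    return {url for url in results if domain in url}

# What the general SERP showed (hypothetical URLs for illustration)
general_serp = {
    "http://en.wikipedia.org/wiki/SearchPerks!",
    "http://www.microsoft.com/searchperks",
}

# What the site-restricted query returned: nothing at all
site_restricted_serp = set()

# The invariant one would expect to hold:
wikipedia_hits = domain_results(general_serp, "en.wikipedia.org")
consistent = wikipedia_hits <= site_restricted_serp

print(consistent)  # False - exactly the contradiction observed
```

If the two result sets came from the same index with the same ranking logic, `consistent` should always be `True`; observing `False` is what suggests manual intervention somewhere.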

Thursday, October 02, 2008

My Concerns About The Proposed Google-Yahoo Search Advertising Deal (In Context Of AOL Search & Ask.com)

The current state of Web search engines is as follows:-
  1. Google: Search results = Google | Ads = Google
  2. Yahoo: Search results = Yahoo | Ads = Yahoo
  3. Live/MSN Search: Search results = Microsoft | Ads = Microsoft
  4. AOL: Search results = Google | Ads = Google
  5. Ask: Search results = Ask.com | Ads = Google / LookSmart
My present concerns are as follows:-
  1. Google already powers search results on 2 of the top 5 search engines, and ads on 3 of the top 5 search engines. In effect, although we have the impression that there are five distinct search engines, in reality we have only four search engines and only three mainstream search-ad engines. AOL and Ask.com nicely create an impression of prevailing competition in the search engine business, while hiding the fact that Google powers them in one way or another.
  2. Far more important than the number of top search engines powered by Google's search results and Google's ads is Google's 'share' of search results and search ads. If both direct and indirect counts are made, Google is an unquestionable monopoly when it comes to search results and search ads.
  3. Google's share in the search engine market is growing relentlessly month-by-month, further choking the air supply of the few credible alternatives left and sending them into a downward spiral.
  4. Since the search engine business is very capital intensive, it's almost impossible for any startup to compete with Google (and the other top search engines). Look at Cuil and Wikia Search - both started off with lots of buzz and media coverage, and both have now been relegated to the 'virtually non-existent' and 'insignificant' category. Their presence or absence doesn't matter.
  5. It is possible (and likely) that in the next 2 years, Google will have over 90% of search engine market share, a dangerous situation for the Web and for the search business.
  6. Ask.com is the leading underdog among the top 5 search engines. Its search results page already seems to rely more heavily on Google-powered ads than on its organic search results and, embarrassingly, the ads are often more relevant than the search results themselves. I have observed that Ask.com gives irrelevant results fairly often, and if it wishes to maintain or grow its market share, it must switch to Yahoo's search results. I believe that Yahoo-powered search results and Google/Yahoo-powered ads are a life-saving combination for Ask. Also, Ask should sell its search engine intellectual property (algorithms, engineers, patents, etc.) to Microsoft, as it's unlikely that Ask will be able to compete with the other search engines using its own search results. Finally, the user interface of Ask.com is cluttered, complex and slow, and Ask must revamp its user interface (especially the search results page) if it wishes to stop its audience from defecting to rivals.
The above list will look like the following if the Google-Yahoo deal does take place:-
  1. Google: Search results = Google | Ads = Google
  2. Yahoo: Search results = Yahoo | Ads = Yahoo & Google
  3. Live/MSN Search: Search results = Microsoft | Ads = Microsoft
  4. AOL: Search results = Google | Ads = Google
  5. Ask: Search results = Ask.com | Ads = Google / LookSmart
My additional concerns, if the deal does take place are as follows:-
  1. One of the only 2 credible Google alternatives (Yahoo and Microsoft) will become dependent on Google. Increased cash flows from Google ads will leave little incentive for Yahoo to innovate and improve its advertising technology. The deal is a poison pill for Yahoo: although Yahoo contends that increased cash flows from the deal will allow it to invest in improving its advertising technology, the deal will instead make Yahoo's advertising system less attractive to advertisers and Google's platform even more attractive, further decreasing Yahoo's profitability from its own ads and making it more dependent on Google ads for revenue. The downward spiral may actually lead to a collapse of Yahoo's ad system.
In summary, I believe that this proposed deal should be blocked, so that Yahoo is forced to innovate and improve its own search engine and advertising network. This will be good for both Yahoo and the Web- in the long term.


Wednesday, September 24, 2008

Missing Simplicity - 3 GiB Install of Windows XP


I was surprised, saddened and a little worried when I saw the properties of the Windows folder on my computer. Over 3 GiB just to install a non-stellar operating system? Where are we heading? Perhaps 20 GiB and 40,000 files just for an install of Windows 7?

I wish to point out that a clean install of Windows XP takes much less space than this, and has far fewer files as well. It's only after one installs all the updates, patches, etc. that the Windows directory becomes excessively bulky and cluttered (in fact, the System32 folder on my computer has so many files that even scrolling through it slows down tremendously as Windows Explorer relentlessly tries to render all the file icons).
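Measuring this kind of bloat is easy to script. A minimal sketch (Python assumed for illustration; the path and the error handling are my own assumptions) that walks a directory tree and reports total size and file count:

```python
# Sketch: compute total size (bytes) and file count of a directory tree.
import os

def directory_stats(root):
    """Walk a directory tree and return (total_bytes, file_count)."""
    total_bytes, file_count = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total_bytes += os.path.getsize(path)
                file_count += 1
            except OSError:
                pass  # skip files we can't stat (locked system files, etc.)
    return total_bytes, file_count

size, count = directory_stats(r"C:\Windows")  # hypothetical path
print(f"{size / 2**30:.2f} GiB across {count} files")
```

Running something like this before and after a batch of updates would quantify exactly how much of the growth comes from the backed-up old components.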

Windows keeps saving backups of all the old files whenever I update a component, something I just don't want, but something Windows just doesn't allow me to choose. Please Microsoft, I just don't want to go back to IE 6 when I install IE 7, IE 7 when I install IE 8 Beta 1 and IE 8 Beta 1 when I install IE 8 Beta 2. Please don't save IE 6 and IE 7 when I install IE 8. I just don't want them back. I want a clean, light and fast system. So please, at least give me an option to not save anything that I don't want saved.

Missing those days when I used to play the 75 KiB Dave game on a 1 MiB MS-DOS install.

Seventy. Five. Kibibytes. Phew!

Friday, September 12, 2008

Lehman Brothers Holdings Inc. - Collapse of an Icon

It's a slightly emotional moment for me, as I keep track of the series of unfortunate developments taking place one after the other at one of the world's oldest, largest, most well known, most respected and most successful financial titans.

Lehman Brothers Holdings Inc., undoubtedly an icon, is on its deathbed. I feel emotional about this because, in a way, I associate myself with Lehman Brothers. In my final year at college, Lehman came to my college for campus recruitment. And I remember that, shamefully, I just missed it. Yes, I missed it for 2 reasons:
  1. I was at home when Lehman came, recruited people from my college and went back.
  2. My resume wasn't sent to Lehman by my college's Placement Cell (or so I've heard).
I remember how, for many days before Lehman finally came to my college, there was a buzz all around about the package Lehman was going to offer - something in the range of 15 million to 20 million Indian rupees per year. This was (and is) naturally a very rewarding package, which anybody would be fortunate to have. I had a strong belief that if it was research or consulting posts that Lehman was coming for, then I would shine for sure. The turn of events was such that I was under the impression that Lehman was going to come some days later, so I went home that weekend, and in the meantime Lehman came and went. And I just missed it. To this day, I don't know what job profile Lehman brought to my college.

I remember feeling quite sad for many days when I came to know that Lehman had come and gone all of a sudden, unexpectedly, without my even knowing it. And even to this day I never fail to narrate the story of Lehman to my friends.

So today, when I read about the impending demise of Lehman Brothers, I feel sad, the way Lehman employees are feeling sad.

Staff gathered in a meeting room at the Lehman Brothers office in Canary Wharf in London on Thursday.

Saturday, August 16, 2008

Missed Opportunities - A Photography Problem

"Opportunity Never Knocks Twice"

This quote, one of my guiding principles, can be seen when my Nokia N-Series phone boots up. And how true it is!

Last night my brother and I were watching Buffett and Gates Go Back to School on NDTV Profit, and a student asked Warren what was the worst investment he ever made. Warren's answer was that the bad investments he had made weren't the ones that "showed up" or were "visible", but the opportunities that he missed.

I realised the truth of this twice in the last 18 hours. There's this problem I (and millions of others) face when doing photography - one frequently encounters moments which arrive suddenly, last a very short time and then quickly disappear... giving one no time to capture them. Two such moments came and went in the last 18 hours:-
  1. I was on the Mall Road, driving back home. A family, with one little kid, was waiting for cars to pass by so they could cross the road. As soon as my car went by, the family tried to cross the road quickly. In the hurry, one of the three balloons the little child was holding slipped out of his hands and onto the road. The child tried to run back to fetch his balloon, but his parents pulled him away, lest there be an accident, even as the child struggled to get his balloon back. In this struggle, two more balloons slipped from the child's hands onto the road. It was a strange feeling to watch the child struggling to go back onto the road while his parents pulled him away. The balloons were the child's world. The child is his parents' world. Both the child and the parents were trying to save their worlds!
  2. Today around 11:30 AM, I was walking in a corridor of the Dayanand Medical College & Hospital. A family was coming from the other side. There was a little girl along with that family, holding her father's finger. While there was sadness and anxiety on faces of all the adult family members (God bless them), the little girl was smiling, jumping and naughtily and playfully laughing. Once again, it was a strange feeling to watch sadness and anxiety on the faces of family members, even as the little girl was smiling and playing (in a hospital- unaware of the words "anxiety", "disease", "life" & "responsibility"). Ignorance is bliss!
And I couldn't capture these moments. Because they came and went by in the blink of an eye. I'll think someday about how to solve this problem.

Sunday, July 06, 2008

Live Search is now better than Ask.com Search Engine (Plus More)

I frequently conduct various tests (queries, user interface checks, speed of loading, etc.) on the top 5 Web search engines (listed here in lexicographic order: AOL Search, Ask.com Search Engine, Google, Live Search & Yahoo! Search). And by way of these self-created tests, I've built up a quite accurate idea of how each of these search engines fares on the following parameters (this list is my own, is in no particular order, and can be viewed as an incomplete list of the important aspects of a Web search engine):-
  1. Relevance of results on the SERP to a user's queries.
  2. The time it takes to load homepage of a search engine.
  3. The time it takes to load the SERP of a search engine.
  4. The comprehensiveness of a search engine's index.
  5. The time it takes for a freshly submitted URL to be indexed by a search engine (Currently not applicable in case of AOL Search & Ask.com Search Engine).
  6. The visual appeal of both the homepage and SERP of a search engine (My current favorite is Live Search).
  7. The ease-of-use & intuitiveness of user interface of a search engine (This is not the same as visual appeal).
  8. The amount of advertising (including self-promotional links) shown on homepage as well as on a SERP.
  9. The relevance of ads shown on a SERP to a user's queries.
  10. The features & tools that a search engine provides to a user.
To emphasize again: the above list is my own, is in no particular order, and can be viewed as an incomplete list of the important aspects of a Web search engine.
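One way to make such per-parameter impressions comparable across engines is a weighted scorecard. A hedged sketch in Python - every weight, engine score and grouping below is hypothetical, chosen purely to illustrate the aggregation, not measured data:

```python
# Hypothetical weights for (a grouped subset of) the parameters listed above.
WEIGHTS = {
    "relevance": 3.0,   # parameter 1
    "speed": 2.0,       # parameters 2-3 combined
    "index_size": 2.0,  # parameter 4
    "ui": 1.0,          # parameters 6-7 combined
    "ads": 1.0,         # parameters 8-9 combined
}

def weighted_score(scores):
    """Combine per-parameter scores (0-10) into one weighted average."""
    total_weight = sum(WEIGHTS.values())
    return sum(scores[k] * w for k, w in WEIGHTS.items()) / total_weight

# Entirely made-up impressions, just to show the mechanics:
engines = {
    "Google":      {"relevance": 9, "speed": 9, "index_size": 9, "ui": 8, "ads": 7},
    "Live Search": {"relevance": 7, "speed": 8, "index_size": 7, "ui": 9, "ads": 7},
}

for name, scores in sorted(engines.items(), key=lambda e: -weighted_score(e[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point of the weights is that a strong showing on low-weight parameters (say, visual appeal) cannot paper over weak relevance, which matches how most people actually judge a search engine.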

I have concluded from today's test of the top 5 Web search engines that Microsoft's Live Search is now better (for daily or general use) than the Ask.com Search Engine. In fact, Live Search is now so good that it does the job about 75% of the time (for me). Only about 25% of the time do I have to switch to Google or Yahoo! Search to get useful results. I wish to add that until some months back, it was the Ask.com Search Engine that was better than Live Search. I also want to clarify that I do not mean that Live Search became better than the Ask.com Search Engine today; rather, it is I who concluded this today, by way of the test I conducted.

Finally, I wish to add that Live Search working for me about 75% of the time (assuming Google is given 100% on the comparison scale) does not mean that Live Search gives good enough results about 75% of the time in general, or 'for everybody' (i.e. that Live Search is as good as Google for 3 out of every 4 queries, on average). It's only for my set of queries that Live Search fares this well. For a different set of queries, Live Search would surely get a different score. Nevertheless, Live Search has improved dramatically. That's beyond doubt.

Friday, June 27, 2008

Why I shall trust Wikipedia a little less from now...

I have had quite a lot of faith in Wikipedia. Till now, that is. Although news stories and blog posts (and lately, some of my colleagues and friends as well) have been complaining about the incorrectness of facts on Wikipedia for quite a while now, I somehow still held strong faith in Wikipedia's correctness. Not anymore. Here's why...

I watched Dil Dosti Etc some days back, and after having watched it, I felt like finding out who Sanjay in the movie is in reality. A quick Google search informed me that it's Shreyas Talpade, and as I started reading about Shreyas Talpade on Wikipedia, I was a little surprised to read his age: 41 years. I was quite surprised that even at the age of 41 he had played the role of a college guy in Dil Dosti Etc, and throughout the movie I didn't feel even once that he was 41.

Here's a screenshot of what I saw on Wikipedia
However, today when I made mummy watch Dil Dosti Etc, mummy said she already knew who Shreyas Talpade is, and challenged me that he isn't 41 years old. I quickly showed mummy the article on Wikipedia, but mummy was firm that it just cannot be true. Then I conducted some celebrity searches on Live Search and found this page on MSN India, from which his current age can be deduced as 34 years. Quite frankly, I was amazed that there exists such a clear discrepancy between what Wikipedia says and what's mentioned on MSN India.

Here's the page I saw on MSN India
Who did I finally believe? Wikipedia or MSN India? I'll believe MSN India on this one.

And I'll believe Wikipedia a tad less from now.

Installing RAM modules having different frequencies- my experience

My problem is mentioned in detail here (it should be worthwhile to read this link prior to reading this post).

I now had 2 RAM modules:-
  1. A 512 MB 400 MHz module from Simmtronics.
  2. A 256 MB 333 MHz module from a lesser-known brand.
The 512 MB module was already present in the primary slot. I added the 256 MB module to the secondary slot. When I powered on my system, the BIOS began to give ominous beeps, indicating a problem. Naturally, it was due to the newly added RAM module. I suspected that this combination of 400 MHz and 333 MHz RAM modules wasn't going to work on my system. Sigh...

As I was about to remove the 256 MB module, an idea flashed in my mind. I quickly swapped the positions of the modules and powered on my system once again, with higher hopes. And guess what? It booted normally! The BIOS showed my system's memory speed as 333 MHz, and the total system memory as 768 MB.

I'm not sure why interchanging the positions of the two RAM modules made things work, but here is what I suspect (this is the idea that had flashed in my mind): putting the 333 MHz RAM into the first slot caused the BIOS to set its frequency as the system's memory frequency during power-on. This frequency was then imposed upon the module in the second slot. Since that module supported a higher frequency (400 MHz), it simply ran at the slower frequency, without any problems.

However, if the 400 MHz module is put in the first slot, 400 MHz is set as the system's memory frequency (by the BIOS), and the module in second slot isn't able to support it, triggering the error beep sequence.

I emphasise again that I'm not sure this is really the reason for what happened - it's simply the idea that came to my mind. Most importantly, it worked!
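For what it's worth, the suspected behaviour can be written down as a toy model (this is purely my guess at what the BIOS does, not documented behaviour for any real motherboard):

```python
# Toy model of the suspected BIOS memory-frequency logic:
# the module in the first slot dictates the system memory frequency,
# and every other module must support at least that frequency, or POST fails.

def boot(slots):
    """slots: list of module max frequencies (MHz), first slot first.
    Returns the system memory frequency, or None if POST would fail."""
    if not slots:
        return None
    system_freq = slots[0]  # first slot sets the frequency
    if all(freq >= system_freq for freq in slots[1:]):
        return system_freq  # every module can run at (or above) this speed
    return None  # some module can't keep up -> error beeps

print(boot([400, 333]))  # 400 MHz module first: 333 module can't keep up
print(boot([333, 400]))  # 333 MHz module first: 400 module downclocks to 333
```

In this model, `boot([400, 333])` fails and `boot([333, 400])` succeeds at 333 MHz, which matches exactly what was observed when the modules were swapped.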

Wednesday, June 25, 2008

A really rude message on uTorrent forums

This is the message I saw (visible in the image below; click the image to see it at full size) when I landed on one of the pages of the uTorrent forums. I reached there from one of the search results on Google Search, and the interesting thing is that this is the first time I have ever been to the uTorrent forums.

It makes me wonder:-
  1. Why was I banned? (Especially when this is the first time I've ever been to the uTorrent forums, and even more so because my computer obtains its IP address via DHCP - so the banned address was most likely earned by whoever held it before me.)
  2. Shouldn't the administrators or the moderators resort to less abusive language when banning someone?

Friday, June 13, 2008

How to correctly enter the cat-only-symbols security code at RapidShare

A few minutes back I wanted to download a file from RapidShare.com. But it looks like they've changed the method they use to ensure that a user is indeed a human. The new method - still a CAPTCHA - is not only confusing, it's quite difficult too. One sees the following text when presented with the security question:-

Please enter the following code when downloading. Only enter symbols attached to a cat. This is for security reasons. Premium users can jump this step.

And an image like this one is visible

A new user will almost certainly think that every symbol has a cat attached to it. However, since the field for entering the security code is only 4 characters long (while there are 7 characters in the image above), some of the characters above must not have a cat attached to them.

Coming to the point, the key here is to enter those characters on which the head of a cat is visible. In the above case, it's NNOO. Characters without a visible cat head are the invalid ones.

As a side note, I feel that this new system deployed by RapidShare is unnecessarily confusing. CAPTCHA systems used by Google and Microsoft, as shown below, do the job (of ensuring that a user is a human) nicely, without spoiling the user experience.