Friday, December 18, 2009

Yeh Jo Zindagi Ki Kitaab Hai

Yeh Jo Zindagi Ki Kitaab Hai, Yeh Kitaab Bhi Kya Kitaab Hai,
Kahin Ek Haseen Sa Khawaab Hai, Kahin Jaan Leva Azaab Hai,

Kahin Chaaon Hai Kahin Dhoop Hai, Kahin Ek Haseen Sa Roop Hai,
Kai Chehray Is Main Chupay Huye, Ek Ajeeb Si Yeh Nikab Hai,

Kahin Kho Dia Kahin Paa Liya, Kahin Roo Dia Kahin Gaa Liya,
Kahin Chain Leti Hai Zindagi, Kahin Haar Baane Bemisaal Hai,

Kahin Ansoon Ki Hai Dastaan, Kahin Muskurahatoon Kabeyaan Hai,
Kahin Barkatoon Ki Hain Baarishain, Kahin Tishniggi Bemisaal Hai,

Yeh Jo Zindagi Ki Kitaab Hai, Yeh Kitaab Bhi Kya Kitaab Hai,
Kahin Ek Haseen Sa Khawaab Hai, Kahin Jaan Leva Azaab Hai

Thursday, December 17, 2009

बस मुस्कुराना चाहता हूं

एक ऐसा गीत गाना चाहता हूं, मैं..
खुशी हो या गम, बस मुस्कुराना चाहता हूं, मैं..

दोस्तों से दोस्ती तो हर कोई निभाता है..
दुश्मनों को भी अपना दोस्त बनाना चाहता हूं, मैं..

जो हम उडे ऊंचाई पे अकेले, तो क्या नया किया..
साथ में हर किसी के पंख फ़ैलाना चाहता हूं, मैं..

वोह सोचते हैं कि मैं अकेला हूं उनके बिना..
तन्हाई साथ है मेरे, इतना बताना चाहता हूं..

ए खुदा, तमन्ना बस इतनी सी है.. कबूल करना..
मुस्कुराते हुए ही तेरे पास आना चाहता हूं, मैं..

बस खुशी हो हर पल, और महके यह गुलशन सारा "अभी"..
हर किसी के गम को, अपना बनाना चाहता हूं, मैं..

एक ऐसा गीत गाना चाहता हूं, मैं..
खुशी हो या गम, बस मुस्कुराना चाहता हूं

Wednesday, December 2, 2009

Automated Regression Testing Challenges in Agile Environment

Abstract

Recently, when I was about to start a new automated testing project with four resources, I considered applying one of the Agile methodologies. But I could not proceed right away, because a series of questions came to mind: “Is it possible to use Agile methodologies in automated testing?”, “Can I use traditional tools?”, “Should I go for open-source tools?”, “What challenges will I face if I implement automation in an Agile environment?” In this article, let us analyze some of the challenges we face while implementing automation with Agile methodologies. Automated testing in an Agile environment runs the risk of becoming chaotic, unstructured, and uncontrolled.


Agile projects present their own challenges to the automation team: unclear project scope, multiple iterations, minimal documentation, early and frequent automation needs, and active stakeholder involvement all place heavy demands on the automation team. Some of these challenges are:

Challenge 1: Requirement Phase

The test automation developer captures requirements in the form of “user stories”, which are brief descriptions of customer-relevant functionality.

Each requirement has to be prioritized as follows:

High: These are mission-critical requirements that absolutely have to be done in the first release.
Medium: These are requirements that are important but can be worked around until implemented.
Low: These are requirements that are nice to have but not critical to the operation of the software.

Once priorities are established, the release “iterations” are planned. Normally, each Agile release iteration takes between one and three months to deliver. Customers and the software folks take the liberty of making many changes to the requirements. Sometimes these changes are so volatile that entire iterations are bumped off. Such changes pose greater challenges when implementing an Agile automation testing process.

Challenge 2: Selecting the Right Tools

Traditional, test-last tools with record-and-playback features force teams to wait until after the software is done. Moreover, traditional test automation tools don’t work in an Agile context because they solve traditional problems, which are different from the challenges facing Agile automation teams. Automation in the early stages of an Agile project is usually very tough, but as the system grows and evolves, some aspects settle and it becomes appropriate to deploy automation. So the choice of testing tools becomes critical for reaping the efficiency and quality benefits of Agile.

Challenge 3: Script Development Phase

The automation testers, developers, business analysts, and project stakeholders all contribute to kick-off meetings where “user stories” are selected for the next sprint. Once the “user stories” are selected for the sprint, they are used as the basis for a set of tests.

As functionality grows with each iteration, regression testing must be performed to ensure that existing functionality has not been impacted by the introduction of new functionality in each iteration cycle. The scale of the regression testing grows with each sprint; to ensure that it remains a manageable task, the test team uses test automation for the regression suite. A minimal sketch of this idea appears below.
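As a minimal sketch of an automated regression suite (the function and test names here are hypothetical, not taken from any particular tool), the idea is simply that checks written for earlier sprints keep running unchanged on every build, while new checks are appended for each new user story:

#include <cassert>
#include <iostream>

// Hypothetical function under test, delivered in sprint 1.
int ApplyDiscount(int price, int percent) {
    return price - (price * percent) / 100;
}

// Regression suite: cases accumulate sprint after sprint and are re-run on
// every build, so a change made for a new story cannot silently break an old one.
void RunRegressionSuite() {
    assert(ApplyDiscount(100, 10) == 90);   // sprint 1: basic discount
    assert(ApplyDiscount(250, 0) == 250);   // sprint 2: zero discount leaves price unchanged
    assert(ApplyDiscount(80, 100) == 0);    // sprint 3: full discount brings price to zero
    std::cout << "Regression suite passed\n";
}

int main() {
    RunRegressionSuite();
    return 0;
}

In practice the suite would be driven by a test framework and the product's own interfaces, but the growth pattern is the same: each sprint appends cases, and the whole set runs automatically.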

Challenge 4: Resource Management

The Agile approach requires a mixture of testing skills; that is, test resources will be required to define unclear scenarios and test cases, conduct manual testing alongside developers, write automated regression tests, and execute the automated regression packages. As the project progresses, specialist skills will also be required to cover further test areas that might include integration and performance testing. There should be an appropriate mix of domain specialists who plan and gather requirements. The challenging part of resource management is finding test resources with multiple skills and allocating them.

Challenge 5: Communication

Good communication must exist among the automation testing team, developers, business analysts, and stakeholders. There must be highly collaborative interaction between the client and the delivery teams. More client involvement implies more suggestions or changes from the client, and it implies more bandwidth for communication. The key challenge is that the process should be able to capture and effectively implement all the changes while data integrity is retained. In traditional testing, developers and testers are like oil and water, but in an Agile environment, the challenging task is that they both must work together to achieve the target.

Challenge 6: Daily Scrum Meeting

The daily Scrum meeting is one of the key activities in the Agile process. Teams meet for 15-minute stand-up sessions. How effective are these meetings? How far do these meetings help the automation developers?

Challenge 7: Release Phase

The aim of an Agile project is to deliver a basic working product as quickly as possible and then to go through a process of continual improvement. This means that there is no single release phase for a product. The challenging part lies in integration testing and acceptance testing of the product.

If we can meet these challenges in a well-optimized manner, then automated regression testing in an Agile environment is an excellent opportunity for QA to take leadership of the Agile processes. QA is well placed to bridge the gap between users and developers, to understand both what is required and how it can be achieved, and to assure it prior to deployment. The automation practice should have a vested interest in both the approach and the result, as well as continuing to assure that the whole evolving system meets business objectives and is fit for purpose.

Tuesday, December 1, 2009

Software Test Estimation - 9 General Tips on How to Estimate Testing Time Accurately

For the success of any project, test estimation and proper execution are as important as the development cycle. Sticking to the estimate is very important for building a good reputation with the client.

Experience plays a major role in estimating “software testing efforts”. Working on varied projects helps to prepare an accurate estimation for the testing cycle. Obviously, one cannot just blindly put some number of days against any testing task. Test estimation should be realistic and accurate.
In this article I am trying to put down some points, in a very simple manner, that are helpful in preparing good test estimates. I am not going to discuss the standard methods for test estimation such as testing metrics; instead I am offering some tips on how to estimate testing efforts for any testing task, which I learned from my experience.

Factors Affecting Software Test Estimation, and General Tips to Estimate Accurately:

1) Think of Some Buffer Time: The estimation should include some buffer, but do not add a buffer that is not realistic. Having a buffer in the estimation enables you to cope with any delays that may occur. Having a buffer also helps to ensure maximum test coverage. A small worked example follows.
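For example (the numbers are only illustrative), if the raw estimate for a test cycle comes to 20 person-days, adding a 15% buffer gives 20 x 1.15 = 23 person-days; committing 23 days instead of 20 absorbs a couple of days of build delays or re-testing without breaking the schedule.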

2) Consider the Bug Cycle: The test estimation also includes the bug cycle. The actual test cycle may take more days than estimated. To avoid this, we should consider the fact that the test cycle depends on the stability of the build. If the build is not stable, then developers may need more time to fix defects, and the testing cycle obviously gets extended automatically.

3) Availability of All the Resources for the Estimated Period: The test estimation should consider all the leaves planned by the team members (typically long leaves) in the next few weeks or months. This will ensure that the estimates are realistic. The estimation should assume a fixed number of resources for the test cycle. If the number of resources is reduced, then the estimate should be revisited and updated accordingly.

4) Can We Do Parallel Testing?: Do you have previous versions of the same product so that you can compare the output? If yes, this can make your testing task a bit easier. You should base the estimate on your product version.

5) Estimations Can Go Wrong: So revisit the estimates frequently in the initial stages, before you commit to them. In the early stages we should frequently revisit the test estimates and make modifications if needed. We should not extend the estimate once we freeze it, unless there are major changes in requirements.

6) Think of Your Past Experience to Make Judgments!: Experience from past projects plays a vital role while preparing the time estimates. We can try to avoid all the difficulties or issues that were faced in past projects. We can analyze how accurate the previous estimates were and how much they helped to deliver the product on time.

7) Consider the Scope of the Project: Know the end objective of the project and the list of all final deliverables. Factors to be considered for small and large projects differ a lot. Large projects typically include setting up a test bed, generating test data, writing test scripts, and so on, so the estimates should be based on all these factors. In small projects, the test cycle typically includes test case writing, execution, and regression.

8) Are You Going to Perform Load Testing?: If you need to put considerable time into performance testing, then estimate accordingly. Estimates for projects that involve load testing should be treated differently.

9) Do You Know Your Team?: If you know the strengths and weaknesses of the individuals working in your team, then you can estimate testing tasks more precisely. While estimating, one should consider the fact that not all resources will yield the same productivity level; some people can execute faster than others. Though this is not a major factor, it adds to the total delay in deliverables.

And finally tip number 10.

Over to You! This test estimation tip is purposely left blank so that you can share your best estimation techniques in the comment section below.

Monday, August 10, 2009

mujhko bhi tarkeeb sikha kuchh

mujhko bhi tarkeeb sikha kuchh yaar julahe
aksar tujhko dekha hai ke
tana bunte bunte
jab koi taga toot gaya ya khatam hua
to phir se usme bandh sira koi
jor aage bunane lagte ho
tere is tane mein lekin
ik bhi ganth girh buntar ki
dekh nahi sakta hai koi
maine to buna tha ik bar ek hi rishta
lekin uski saari girhain saaf nazar aatee hain

Thursday, July 30, 2009

Yeh mere chehre ki hansi toh bas dikhaava hai

Yeh mere chehre ki hansi toh bas dikhaava hai॥
Dil mein palte dard ko chupaane ke liye॥
Aankhein band karli hai maut se pehle hi humne..
Kya bacha ab usse apne aansu dikhaane mein..
Sab kuch toh bhool chukke thei unse milke..
Bas ek bohat bada dil chaahye usse bhulaane ke liye..
Meri haalat dekh ke jo aaj hans pade vo..
Kabhi khud roya karte thei mujhe hasaane ke liye..
Hum pe bhi vo waqt aaya tha, Kisi ko aankhon mein basaaya tha,
Ussi ne lutt liya mera jahaan, Jisse iss dil ki dhadkan banaaya tha
Tanhaiyo me baith ke dil ko manate hain..
Apne zakhmo per khud marham lagate hain...
Mumkin nahi tha usse kabhi apna bana na..
Roz phir bhi ye khwab ham sajate hain..
karke gaye hain wada ham se phir-milne ka..
Bas intezar me unke ham waqt guzarte hain..
Maana k woh bewafa hain, magar phir bhi,
Karke usse yaad dard-e-dil aur badate hain...
Thak-gayi uska rasta dekhte dekhte..
Koi-Bataye jo chala gaya,use kaise bhulate hain..
Baadal hain aankh ke jo baraste hi nahi hain
Mujhko choone ko haath unke larazte hi nahi hain
Mujhko gale laga lo,mujhko kahi chupa lo
Roz farishte aasman se utarte hi nahi hain
Unko maloom hai ke hans kar main gum ko chupati hu
Yu dikhate hai jaise mujhko samajhte hi nahi hain
Jab bhi dua maangu,unki khushiya dua me maangu
Wo mere dil ko dukhane me jhijhakte hi nahi hain
Qayamat ke iss safar me kaha khud ko bhula baithe
Unki yaado ke saaye humse bichadte hi nahi hain

ye jo khwahisho ka parinda hai, ise mausamo se gharaz nahi,
ye urega apne hi mauj me, ise aab de ya saraab de,
kabhi yun bhi ho tere rubaru mein aa ke ye keh sakoon,
mere hasraton ko shumar kar mere khwahisho ka hisaab de

Thursday, July 23, 2009

truth in life

There's one sad truth in life I've found
While journeying east and west -
The only folks we really wound
Are those we love the best.
We flatter those we scarcely know,
We please the fleeting guest,
And deal full many a thoughtless blow
To those who love us best.

Friday, July 17, 2009

ज़िन्दगी यूँ हुयी बसर तनहा

ज़िन्दगी यूँ हुयी बसर तनहा
काफिला साथ और सफर तनहा

अपने साए से चौंक जाते है
उम्र गुजरी है इस कदर तनहा

रात भर बोलते हैं सन्नाटे
रात काटे कोई किधर तनहा

दिन गुज़रता नहीं है लोगों में
रात होती नहीं बसर तनहा

हमने दरवाज़े तक तो देखा था
फ़िर न जाने गए किधर तनहा

Ya Dil Ki Suno Duniya Walon

yaa dil kee suno duniyaawaalon
yaa muz ko abhee choop rahane do
mai gam ko khushee kaise kah doo
jo kahate hain unako kahane do

ye fool chaman mein kaisaa khilaa
maalee kee najar mein pyaar nahee
hasate huye kyaa kyaa dekh liyaa
ab bahate hain aansoo bahane do

yek khwaab khushee kaa dekhaa nahee
dekhaa jo kabhee to bhool gaye
maangaa huaa tum kuchh de naa sake
jo tum ne diyaa wo sahane do

kyaa dard kisee kaa legaa koee
itanaa to kisee mein dard nahee
bahate huye aansoo aaur bahe
ab ayesee tasallee rahane do

Wednesday, April 1, 2009

कांच की बरनी और दो कप चाय - एक बोध कथा

I received this wonderful story, which was sent by one of my dearest friends. I hope you will also love it. Click to enlarge the image if you are unable to see it.



Wednesday, January 21, 2009

How to Change the Virtual Memory Paging File in Vista

Information

If your computer lacks the random access memory (RAM) needed to run a program or operation, Vista uses virtual memory to compensate. Virtual memory combines your computer’s RAM with temporary space on your hard drive. When RAM runs low, virtual memory moves data from RAM to a space called a paging file. Moving data to and from the paging file frees up the RAM to complete its work. By default Vista will manage virtual memory automatically. This will show you how to manually change the size of the paging file.
NOTE

The more RAM your computer has, the faster your programs will generally run, since Vista may not have to use virtual memory as often. If a lack of RAM is slowing your computer, you might be tempted to increase virtual memory to compensate. However, your computer can read data from RAM much more quickly than from a hard disk, so adding RAM is a better solution. Plus, Vista usually does a great job of managing virtual memory for you. Another option is to have the paging file on another hard drive (step 10), not just another partition, that is as fast as or faster than the hard drive Vista is installed on.
The Virtual Memory Paging File is located at: C:\pagefile.sys
Tip

To improve the performance of Vista, you can place the paging file on a second physical hard drive instead of the same C: drive that Vista is on. Doing this allows Vista to dump temp junk onto one drive while not having to interrupt reads or writes on the other drive. You can expect a 5 to 10% increase in speed depending on the speed of your hard drives.
WARNING

If you receive error messages that warn of low virtual memory, you need to either add more RAM or increase the size of your paging file so that you can run the programs on your computer. Vista manages the size automatically, but you can manually change the size of virtual memory if the default size is not enough for your needs or you wish to change what drive is used for the paging file.



Here's How:

1. Open the Start Menu.
A) Right click on Computer and click Properties.
B) Go to step 3.
OR

2. Open the Control Panel (Classic View).
A) Click on the System icon.
3. Click on Advanced system settings. (See screenshot below)
NOTE: While you're here, note how much Memory (RAM) you have installed under the System section.

4. Click on Continue in the UAC prompt.
5. In the Advanced tab, click on the Settings button in the Performance section. (See screenshot below)
advanced_system_properties.jpg
6. Click on the Advanced tab. (See screenshot below)
7. Under Virtual memory, click on the Change button.
advanced_performance_options.jpg
8. To Turn Off Automatic Virtual Memory Management for All Drives -
A) Uncheck the Automatically manage paging file size for all drives box. (See left screenshot below step 9)
NOTE: This turns off automatic virtual memory management by Vista so you can manually change the drive and size to what you want instead.

B) Go to step 10.
9. To Turn On Automatic Virtual Memory Management for All Drives -
A) Check the Automatically manage paging file size for all drives box. (See right screenshot below)
B) Click OK.
C) Go to step 15.
automatic_yes_no.jpgautomatic_yes_no2.jpg
10. To Select a Drive to Add or Change the Paging File -
NOTE: By default, Vista uses the same drive letter that it is installed on. This system drive is usually the C: drive.
WARNING: If you have another drive listed and want to use it instead, then make sure it is as fast as or faster than the drive Vista is installed on. Make sure you only use a separate hard drive, not another partition on the same hard drive that Vista is installed on; using another partition on the same drive will decrease performance.
A) Click on a listed hard drive you want to change or add a paging file to for Vista to use. (See right screenshot above)
11. To Have a Custom Paging File Size for the Selected Drive -
NOTE: You would do this if you do not want to use the automatic system managed size by Vista.
A) Dot Custom size. (See screenshots below step 15)
B) Type in a size for the Initial size in MB.
NOTE: This usually would be the amount of RAM installed on your computer plus 300 MB. (1 GB = 1024 MB)

C) Type in a size for the Maximum size in MB.
NOTE: This usually would be 2.5 to 3 times the amount of RAM installed on your computer.

D) Go to step 14.
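NOTE: As a worked example (the numbers are illustrative only), a computer with 2 GB of RAM has 2 x 1024 = 2048 MB, so the Initial size would be about 2048 + 300 = 2348 MB and the Maximum size about 2.5 x 2048 = 5120 MB to 3 x 2048 = 6144 MB.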
12. To Have a System Managed Paging File Size for the Selected Drive -
NOTE: This will let Vista automatically manage the size of the paging file for this selected drive as needed.
A) Dot System managed size. (See screenshots below step 9)
B) Go to step 14.
13. To Remove the Paging File from the Selected Drive -
WARNING: Make sure that you have at least one drive selected to have a paging file on. Otherwise your computer may slow down dramatically.
NOTE: You would usually only do this if you have more than one drive that you already added a paging file to from step 11 above.
A) Dot No paging file. (See screenshots below step 15)
14. Click the Set button. (See left screenshot below)
NOTE: Repeat steps 10 to 14 if you would like to make more changes to the paging file, or add a paging file to another listed drive.

15. Click on OK. (See right screenshot below)
custom.jpgcustom2.jpg
16. If the Paging File Size was Decreased -
NOTE: If the paging file was decreased, the computer will need to be restarted before the changes can be applied. You will not see this if you increased the size.
A) Click OK. (See screenshot below)
decrease_ok.jpg
17. Click on OK. (See screenshot below step 7)
18. Click on OK. (See screen shot below step 5)

19. If the Paging File Size was Decreased -
NOTE: You will not see this if you increased the size.
A) Click Restart Now. (See screenshot below)
NOTE: Be sure to save and close anything open first. This will restart the computer immediately.
Restart_Now.jpg

Tuesday, January 20, 2009

Find and Fix Vulnerabilities before Your Application Ships

In software development, a small coding error can result in a critical vulnerability that ends up compromising the security of an entire system or network. Many times, a security vulnerability is not caused by a single error, however, but rather by a sequence of errors that occur during the course of the development cycle: a coding error is introduced, it goes undetected during the testing phases, and available defense mechanisms do not stop a successful attack.

Security must be a priority in all phases of software development. Effort should be aimed at preventing software vulnerabilities—detecting them before release, of course, but also limiting their practical impact (for example, by reducing the product's attack surface). At Microsoft, such a holistic approach to security is implemented through the Security Development Lifecycle (SDL), which covers all major phases of software development, including educating developers, improving design, employing coding and testing practices, and preparing for emergency responses after the release of a product, as you can see in Figure 1. SDL is not the only way to approach code review, but it does form the basis for much of what we'll cover here.


Figure 1 The Security Development Lifecycle

In this article we will discuss manual security code reviews performed by developers or security experts. In a process defined by the SDL, such efforts usually take place during a security push or penetration-testing engagement and are associated with a final security review. Coding errors can be found using different approaches, but even when compared to sophisticated tools, manual code reviews have clearly proven their value in the areas of precision and quality. Unfortunately, manual code reviews are also the most expensive to execute.

We also intend to discuss in detail the advantages and disadvantages of security code reviews in the context of large software projects. This article has been prepared based on experiences gathered over time, through reviews of major products released by Microsoft over the last few years.

Software Security Vulnerabilities

A software product of nontrivial size and complexity should never be assumed free of security vulnerabilities. All possible steps should be taken to limit the number of coding errors and reduce their practical impact, but something is always missed. Software errors that affect security (referred to as vulnerabilities) can exist at different levels of the application and be introduced during different phases of the development cycle.

Vulnerabilities are not limited to code. They can be introduced as early as the requirements definition in the form of a requirement that cannot be implemented in a secure manner. The basic design of a product may also contain flaws. For example, inappropriate technologies may be selected or their use may be incorrect from a security point of view. Ideally all of these problems will be identified by design reviews or threat modeling during the early stages of product development.

Security code reviews are primarily aimed at finding the code-level problems that still cause a majority of security vulnerabilities. These efforts may result in the identification of design issues as well, but such issues could be related to needed improvements in threat modeling and other aspects of the development process. Source code reviews can also be conducted with non-security-related priorities. But in this specific context, the goal is to find code vulnerabilities that could be used to either break security guarantees made by the product or to compromise the security of the system.

A coding error that can potentially cause security problems (such as problems due to lack of validation) must fulfill specific conditions to constitute a security vulnerability. There must be a security boundary to attack, and an attacker needs to have some level of control over the data or the environment. Problems that may exist in code that is executed within the same security context as the attacker offer no potential privilege gain in exploiting the vulnerability. In other cases, vulnerabilities exist, but they are located in code that cannot be executed because it is not accessible by the attacker. These coding errors, although they may affect product reliability, should not be considered actual vulnerabilities. The ultimate goal of security code reviews is to find code vulnerabilities that are accessible by an attacker and that may allow the attacker to bypass a security boundary.

Note that the accessibility of a vulnerability is not equivalent to its exploitability; a successful attack may still be mitigated by platform enhancements such as the /GS flag, the /SafeSEH flag, or Address Space Layout Randomization (ASLR).

The exploitability of a code vulnerability is not in the scope of investigations performed during code reviews, primarily because it is usually impossible to prove that available exploitation mitigations are sufficient. The responsibilities of a code reviewer end with confirmation that a code vulnerability exists and that it is appropriately triaged. From that moment the bug should be considered simply another problem that requires a fix.

Identifying code vulnerabilities is a primary goal of security code reviews, but there can be additional outcomes from the effort. A reviewer may provide feedback about overall code quality, redundancy, dead code, or unnecessary complexity. Reviewers may also deliver recommendations for improvements in reducing the attack surface, data validation, code clean-up, or code readability (such as improving comments). However, since documenting such outcomes takes time, you should decide before starting whether results should also include such recommendations or whether efforts should focus solely on identifying security problems.

Finding Coding Errors

There are different approaches to finding code errors. Since each has both unique advantages and practical limitations, it is important to understand the difference between code reviews and other options. For the purposes of this article, we will assume that both source code and design documentation are available, and that a so-called "white-box" analysis of the product is performed (an internal analysis, as opposed to a "black-box" approach that focuses only on externally visible behavior).

Code review can be described as a manual and static approach to finding coding errors. Using similar descriptions, two other popular approaches can be described as an automated static approach and an automated dynamic approach. The automated static approach usually takes the form of static code analysis tools that operate on source code in an attempt to identify known types of problems defined using patterns. PREfix and PREfast are examples of this approach. The automated dynamic approach is represented by automated code testing techniques (such as fuzzing), which are primarily focused on files, protocols, and APIs. Although these solutions can be applied also in a black-box approach, much better results can often be achieved if information about internal elements such as file formats is available and used appropriately.

Each approach has certain practical advantages and limitations. Static code analysis tools allow more code to be processed through automation, but findings are strictly limited by the set of predefined patterns for known types of problems. The results may often also contain a large number of false positives that make addressing issues difficult with limited resources. Fuzz testing can be easily automated and conducted on a continuous basis, but it operates in at least a partially random manner and may have problems with reaching deeper parts of the code. In most cases it is relatively easy to conduct basic fuzzing, yet it is much more difficult to achieve complete coverage of critical code paths.
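As a rough illustration of why basic fuzzing is easy to set up yet struggles to reach deeper code paths (a hypothetical sketch, not part of any SDL tooling), consider a dumb fuzzer driving a parser that gates all interesting logic behind a magic value:

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Stand-in for the real file/protocol parser that would be the fuzzing target.
bool ParseMessage(const uint8_t* data, size_t length) {
    // A "deep" path that purely random input almost never reaches: parsing only
    // proceeds if the buffer starts with an exact four-byte magic value.
    if (length < 4 || data[0] != 'M' || data[1] != 'S' || data[2] != 'G' || data[3] != '1') {
        return false;
    }
    // ... real parsing of the message body would continue here ...
    return true;
}

// Minimal dumb fuzzer: feed random buffers to the parser and rely on crashes or
// assertions (surfaced by the harness or memory-checking tools) to flag defects.
void FuzzParser(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::vector<uint8_t> input(static_cast<size_t>(rand() % 4096));
        for (auto& b : input) {
            b = static_cast<uint8_t>(rand());
        }
        ParseMessage(input.data(), input.size());
    }
}

int main() {
    FuzzParser(100000);
    return 0;
}

Format-aware (smart) fuzzing that generates valid headers and then mutates the body is what allows the deeper code paths to be exercised.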

Compared to automated methods, the advantages and disadvantages of manual source code reviews are connected primarily with direct involvement of a human reviewer. Results of efforts may differ significantly because they depend on a participant's experience with specific technologies, architectures, and scenarios. Generally, it is wise to assume that a human reviewer still has advantages over a machine-based solution. The reviewer can learn throughout the process, understand the context of software components, and interact with designers and developers. Additionally, a reviewer can provide feedback that is not limited only to a detailed issue report, but can also include high-level recommendations.

At the same time, however, a reviewer is susceptible to human error, fatigue, or boredom. The scope of a review is usually limited, and evaluating the quality of the review may be difficult since it is usually based on the subjective confidence of the reviewer.

These disadvantages can be overcome, however. One can design a task-specific review, aimed at analysis of a code block in multiple passes for only one type of error at a time. Using another approach, multiple reviewers can perform independent reviews of a critical piece of code, thus limiting the probability of human error. Code reviews are one of the specific cases where redundancy has huge potential value as it allows overcoming the limitations of human involvement.

Manual code review should never be considered as the ultimate solution for finding code vulnerabilities or as a replacement for other approaches, but rather as a complementary solution. Code reviewing should never be introduced to replace threat modeling, fuzzing, or enforcing coding best practices. Compared to other approaches, code reviewing is usually more expensive, so it should be used primarily in the scope of the most critical areas and problems where the effectiveness of other approaches is limited.

The Code Review Process

Security code review is most successful if it is planned and executed in the context of other security-related efforts such as threat modeling (see Figure 2). Additionally, the results from code reviews can show additional value by improving other security tasks such as testing and design.

Figure 2 Code Reviews

The value of good threat models for code reviewing can hardly be overestimated. To conduct a successful code review, a reviewer must have a good understanding of a product's goals, its design, and the technologies used for its implementation. The first two areas are also covered by threat modeling, and, although they focus on finding high-level problems, they include research also useful in code reviewing. These two kinds of security efforts are generally complementary; threat modeling helps to identify a critical area of code that then becomes a subject of detailed review. Results from a code review can likewise be used to validate (or question) security assumptions specified in a threat model.

In an ideal situation, a security code review would begin with a review of the quality threat models and design specifications, then move to source code. Before beginning, all development work on code within the scope of the review should be completed. To catch the simpler issues, available security tools should be run against the code before manual review begins. Finally, to avoid finding already-known issues, all previously identified security bugs should be fixed.

The security team should plan code reviewing efforts sufficiently early in the development lifecycle to identify the most suitable time for its execution. An actual plan should also address organizational decisions that might affect the overall return on this investment. For example, there are three likely options for selecting participants in a code review. First, reviewers can be external to the code and product (a code review expert from a penetration testing team). Second, a reviewer can be external to the code but familiar with the product (a developer from another team in the same organization). Finally, reviewers can be familiar with both the code and the product (such as the developer working on the code).

Each option has advantages as well as limitations. In the case of reviewers external to a product, usually skilled security experts are selected and they can make a big difference thanks to their unique experience and perspective. However, in most cases they will have a lower level of understanding of a product's internals and implementation details compared to members of the product team. Developers who wrote the specific fragments of code usually understand them best, but they are also the most susceptible to creator's bias and might therefore overlook significant coding errors.

None of these options is an ultimate solution that can be automatically applied to every case. Code reviewing depends more on human participants than technical solutions and thus it needs to be adopted by the team that is assigned to the effort. Some teams achieve the best results by discussing and analyzing the code in group meetings. In other cases, developers obtain the best results by walking through the code on their own. The team should choose whatever strategy allows participants to be effective in reviewing a product. This general rule applies not only to planning and organizing a code review, but to all challenges that need to be faced in such a process.

Prioritizing Review Efforts

The most important rule for code reviewing is realizing that there is never enough time to review all the code you would like to review. One of the biggest challenges, therefore, is selecting and prioritizing the product's components or code fragments that belong in the primary, or at least initial, scope of analysis. To achieve this goal, it is necessary to understand the design of a product and the role of specific technologies used to implement it. The prioritization is a high-level analysis and it defines the framework for the whole engagement.

The prioritization begins with analysis of threat models and other documentation available for a product. If properly done, a threat model can provide a lot of useful data about relations between different components, entry points, security boundaries, dependencies, and assumptions in the design or implementation. Since the goal of the prioritization is to gain understanding of the implementation details, information from a threat model must then be compared against the code itself, analyzing information about the code's structure, quality, or development process.

Prioritization can be divided into four major tasks, which are presented in Figure 3 along with examples of questions that may be helpful in finding required information.

Figure 3 Code Review Prioritization Tasks and Questions


Understand the Environment and Technologies Used to Implement the Product

Is it a Trusted Computing Base (TCB) component: kernel driver, subsystem, or privileged service?

Is it started by default, co-hosted, or running as a dedicated process?

Is it a reusable plug-in, driver, codec, protocol handler, or file parser?

Does it load and directly invoke other code (third-party libraries)?

What underlying technologies are used (RPC, DCOM, ActiveX, WMI, SQL)?

What languages (C/C++, C#, Visual Basic) and libraries (STL, ATL, WPF) are used?

Enumerate All Sources of Untrusted Input Data (Entry Points)

Is it available via network (TCP, UDP, SMB/NetBIOS, named pipes)?

Does it use IPC communication (LPC, shared sections, global objects)?

Can it be driven programmatically (automation or scripting)?

What resources does it use (registry, files, databases)?

Does it have a UI, parse command-line arguments, or use environment variables?

Does it directly communicate with external hardware (USB driver)?

Determine Who Can Access Entry Points and under What Conditions

Is it a security boundary that allows for potential elevation of privilege?

Is this entry point limited to remote, local, or physical access?

Are there access restrictions on transport, interface, or function level?

Is authentication required (anonymous, authenticated, service, admin)?

What mechanisms are used for authentication, securing secrets, and so on?

Are there any other constraints (allow/deny lists, certificates)?

Include Nontechnical Context of Code

Is it new or legacy code? Is it still in development or in maintenance mode?

Was the security context changed (code moved from user to kernel mode)?

What is the history of security efforts in this product team or organization?

What is the history of the product's security?

What is the security awareness of the team or organization?

What are the results from previous code reviews?

What kind of problems have been detected using automated tools? Where?

The first task is related to understanding the technological context of the code. This context covers not only the specific technologies that are used in a product, but also operating system and third-party dependencies as well as tools used in development. The goal of this task is to identify relationships between a product and other systems, applications, or services. Based on these relationships, it is possible to determine what components a product relies on as well as what other software depends on your product. In the context of security, these relationships determine how a product affects the rest of the system and how it may be affected by it. Some high-risk areas become visible through this process.

The high-risk areas are usually associated with concepts such as security boundaries and potential attack vectors. In practice, it all can be reduced to data that can be controlled by an attacker, which should be considered untrusted, and entry points from which this data can come. The team must analyze each of these entry points in relation to guarantees and assumptions about incoming data. One key question is when and how data is validated. This question can often be related to actual data characteristics (can it change asynchronously or be sent out of order?). It can also be related to characteristics of an entry point, such as whether it is available by default (a network service or programmatic interface) or whether it is created as a result of a user's action (a click or opening a data file).

The characteristics of an entry point are also in the scope of the next big task: analysis of trust relationships and access control. A product may have many different entry points, but its actual attack surface is defined only by those that are accessible to an attacker across a security boundary. Each of the entry points should therefore be investigated to verify who can access it and under what conditions. Unauthenticated access to wide functionality is obviously more interesting for an attacker than entry points accessible only with administrative privileges. In the latter case, the review may be limited to just the piece of code responsible for actual authentication (including the code preceding it).

Last but not least, nontechnical sources can also provide data useful for prioritization of code review. Code is typically created by teams of developers; teams usually change over time as new members join and others leave. Code also has its own security history, sometimes not only in the scope of a specific product. In most cases it can be assumed that newer code is of better quality than legacy code, but there can be exceptions. Talking with developers, reviewing documentation from previous reviews, or sampling code may provide very useful input. To achieve the best utilization of available resources, the security team should use any data that can help reviewers to remain effective throughout the process.

Code Reviewing Tactics

The challenges connected with code reviews of nontrivial applications result primarily from the size and complexity of software. Modern products are created using a variety of different technologies, programming models, coding styles, and code management policies. In practice, complete and up-to-date documentation is rarely available. A reviewer usually faces much more information than can be processed by a human in an acceptable timeframe. To deal with information overload during the review of complex products, certain practices, referred to as tactics, have been developed. We'll discuss the two most common.

The first tactic, which we call contextual code analysis, is the natural consequence of prioritization and high-level analysis efforts. It relies on precise and context-aware analysis of data flows in areas of code directly involved with processing untrusted data that was passed across a security boundary. The advantage of such a targeted approach is the possibility of focusing on the code that is more likely to be attacked and thus achieve good coverage in the scope of the most critical code paths.

Contextual code analysis can be divided into two passes. The first one starts with an entry point and an understanding of the level of control an attacker has over input data. From this point, the developer conducts a data flow analysis aimed at tracking how variables and data structures change with execution of the program. The analysis is performed on a per-function basis, starting with input arguments, then moving through all execution paths, simultaneously monitoring changes in states of variables and structures.

Every function that is called and takes data controlled by an attacker (coming from an entry point and not yet validated) should be analyzed in the same way. Similarly, any propagation of data to global data structures should be flagged so that every reference to it can later be analyzed. At this point, navigation through the code is aimed at improving understanding of code, discovering its structure, and flagging places for further detailed investigations.

Obviously in the case of security code reviews, one needs to pay special attention to code areas that are more likely to have security problems. Examples of such places are listed in Figure 4.

Figure 4 High-Risk Code Paths for Review

User identification, authentication, and data protection

Authorization and access checks

Code that preprocesses untrusted data (network packet parsers, format readers)

Unsafe operations on buffers, strings, pointers, and dynamic memory

Validation layers ensuring untrusted data is in valid format

Code responsible for conversion of untrusted data to internal data structures

Logic involved with interpreting untrusted data

Places making assumptions about the data itself as well as behavior of its source

Code involved in handling error conditions

Usage of OS resources and network (files, registry, global objects, sockets)

Problematic areas typical to the environment in which the code executes

Usage of problematic API or violations of API contracts

If specific code doesn't process any data that could originate from an entry point or processes data that was already validated, it stops being relevant from a security point of view and the reviewer should move forward to other areas. At the end of the day, a reviewer should have a general understanding of the component's code as well as the list of locations flagged as potentially interesting places in the code. Using this list, the reviewer can begin the second pass of contextual analysis, which is focused on an in-depth investigation of selected areas of code.

We call the second code reviewing tactic pattern-focused code analysis. In this case, a reviewer starts at an arbitrary place in the code and looks for known types of potential security vulnerabilities. Although different supporting tools may be applied, "known" doesn't have to refer to any formal patterns, but rather to the reviewer's individual experience and intuition. For each vulnerability candidate, a reviewer follows up all code paths in order to determine whether the coding error actually represents a vulnerability—processing data that can be controlled by an attacker over a security boundary. If correct validation is identified at any level, the error should not be considered a security vulnerability, although it still may be identified as a defense-in-depth or non-security issue that requires a fix.

Pattern-focused code analysis can be used with limited understanding of an application, since a reviewer can focus on the quality and correctness of selected fragments of code rather than their role or location within a system. This allows for the coverage of a significant amount of code in the scope of specific patterns such as bad code constructs or problematic API calls. However, analyzing potential coding errors in isolation from the context of an entire application usually does not give enough information to determine whether it is a real problem. This may lead to overlooking serious bugs or fixing nonrelevant ones, posing additional long-term consequences to the application. If a bug is described only in local context, it may be tempting to introduce a local fix instead of a more complete validation at a higher level. Such an approach to fixing coding errors may result in redundant validations, increased chaos, decreased performance, and general problems with code management.

Types of Vulnerabilities

During security code reviews, try to maintain an attacker's point of view and look at everything that could be in the attacker's control. Conducting code reviews should not be limited to looking for dangerous APIs and data-copy operations. When it comes to security, the details are significant and history shows that the smallest flaws can be exploitable. This section contains examples of some types of coding errors, but there is no way to cover all of them here. The "Security Resources" sidebar will lead you to more comprehensive sources of information on likely vulnerabilities.

The general concept of security code vulnerabilities is often associated with buffer overflows. In the context of processing untrusted data, coding errors related to range and data type are still a very common source of security problems. This group of vulnerabilities is not, however, limited only to buffer overflows. The buffer overflow condition is connected with copying data to the stack, data segments, or the heap without concern as to their size, which may lead to writing beyond the defined range and hazardous situations resulting from potential overwrites of structures controlling program flow or other sensitive data.
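To make the pattern concrete, here is a minimal illustrative sketch (not code from any Microsoft product) of the classic unchecked copy into a fixed-size stack buffer, together with a bounds-checked variant:

#include <cstring>

// Classic stack buffer overflow pattern: attacker-controlled 'input' is copied
// into a fixed-size buffer with no length check, so anything longer than 63
// characters writes past the end of 'name' on the stack.
void VulnerableCopy(const char* input) {
    char name[64];
    strcpy(name, input);   // no bounds check: overflow if strlen(input) >= 64
    // ... use name ...
}

// A safer variant validates the length before copying.
bool SafeCopy(const char* input) {
    char name[64];
    if (strlen(input) >= sizeof(name)) {
        return false;      // reject oversized input instead of overflowing
    }
    strcpy(name, input);
    // ... use name ...
    return true;
}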

Equally common and severe examples of coding errors related to range validation are out-of-buffer reads and writes. These specific problems often happen when using pointers received from an untrusted source (using pointers as cookies or handle values) or miscalculated indexes and offsets. They allow attackers to access arbitrary places of process memory such as data variables or function pointers and cause changes in program behavior including execution of arbitrary code.
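A hedged sketch of the out-of-bounds access problem (illustrative names only): an index received from an untrusted source is used directly, allowing reads or writes at attacker-chosen offsets relative to the buffer:

#include <cstddef>
#include <cstdint>

static int32_t g_handlers[16];

// Out-of-bounds write: 'index' arrives from an untrusted source and is used
// without validation, letting an attacker write relative to g_handlers.
void SetHandlerUnchecked(size_t index, int32_t value) {
    g_handlers[index] = value;   // out-of-bounds if index >= 16
}

// The fix validates the index against the array size before using it.
bool SetHandlerChecked(size_t index, int32_t value) {
    if (index >= sizeof(g_handlers) / sizeof(g_handlers[0])) {
        return false;
    }
    g_handlers[index] = value;
    return true;
}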

In the scope of calculations, many transformations on numeric type variables (addition, subtraction, multiplication, and so on) may cause the calculation to leave its defined range and silently wrap, leading to integer overflows or underflows. This becomes a problem when the variable is used after transformation and at the same time somewhere else it is used either untransformed or transformed in a different way. As a result, existing checks may turn out to be insufficient, leading to buffer overflows or out-of-buffer reads and writes.
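As an illustrative sketch (hypothetical function names), an unchecked multiplication in a 32-bit size calculation can wrap, so the allocation is far smaller than the number of elements the subsequent loop writes:

#include <cstdint>
#include <cstdlib>

// The size calculation is done in 32-bit arithmetic, so for a large 'count' the
// multiplication wraps modulo 2^32 and the buffer is much too small, while the
// copy loop still iterates 'count' times and writes far past the allocation.
uint32_t* CopyRecordsUnchecked(const uint32_t* src, uint32_t count) {
    uint32_t bytes = count * 4u;                              // may wrap
    uint32_t* dst = static_cast<uint32_t*>(malloc(bytes));
    if (dst == nullptr) return nullptr;
    for (uint32_t i = 0; i < count; ++i) {
        dst[i] = src[i];                                      // heap overflow
    }
    return dst;
}

// A safer variant rejects counts whose byte size cannot be represented.
uint32_t* CopyRecordsChecked(const uint32_t* src, uint32_t count) {
    if (count > UINT32_MAX / 4u) return nullptr;              // would overflow
    uint32_t* dst = static_cast<uint32_t*>(malloc(count * 4u));
    if (dst == nullptr) return nullptr;
    for (uint32_t i = 0; i < count; ++i) {
        dst[i] = src[i];
    }
    return dst;
}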

Other problems are related to the sign of numeric type variables. There are many abstract entities that a program deals with that have no defined meaning for negative values. For example, string byte lengths or array element counts do not really make sense as negative values—and using signed types to manipulate these quantities introduces a number of problematic cases.
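A brief sketch of the signedness problem (illustrative only): a length carried in a signed type passes an upper-bound check with a negative value and is then converted to a huge unsigned size:

#include <cstddef>
#include <cstring>

// 'len' arrives from untrusted input as a signed value. The upper-bound check
// passes for negative values such as -1, but the copy size is a size_t, so the
// negative length converts to an enormous unsigned value and overflows 'buffer'.
void StoreName(const char* src, int len) {
    char buffer[64];
    if (len < 64) {
        memcpy(buffer, src, static_cast<size_t>(len));
    }
    // ... use buffer ...
}

// Using an unsigned type for the length (or also rejecting len < 0) removes the
// problematic case.
void StoreNameFixed(const char* src, size_t len) {
    char buffer[64];
    if (len < sizeof(buffer)) {
        memcpy(buffer, src, len);
    }
    // ... use buffer ...
}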

Type casting (explicit or implicit) is another example of potentially unsafe transformation of data elements. Casting may make items bigger (in terms of bits) and hence may involve the propagation of signed bits into new values. Likewise, type casting could also cause the truncation of a value. Again, problems typically surface when the data item is used in a transformed state in one part of the program while used in an untransformed or differently transformed state in another.
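An illustrative sketch of value truncation through casting (hypothetical names): the length is validated as a 64-bit value, truncated to 16 bits for the allocation, and then the original, untruncated value is used for the copy:

#include <cstdint>
#include <cstdlib>
#include <cstring>

// The 64-bit length is validated, but it is then squeezed through a 16-bit
// field, so the allocation uses a truncated size while the later copy uses the
// original, larger length.
void ParseRecord(const uint8_t* data, uint64_t length) {
    if (length > 1000000) return;                         // checked as 64-bit
    uint16_t allocLen = static_cast<uint16_t>(length);    // truncates if length > 65535
    uint8_t* buffer = static_cast<uint8_t*>(malloc(allocLen));
    if (buffer == nullptr) return;
    memcpy(buffer, data, length);                         // copies the full length
    // ... use buffer ...
    free(buffer);
}

The defect is not the cast itself but the fact that the transformed and untransformed values are used in different places, exactly the pattern described above.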

There are specific security code vulnerabilities related to using dynamic memory. Program behavior with uninitialized data elements may be undefined. An attacker may be able to control some uninitialized variables by calling APIs to get or set the uninitialized variables prior to the API that initializes them. Returning uninitialized memory to an attacker can cause a number of problems. Since heap allocations come from a resource that may be shared by different threads servicing different security contexts (clients), sensitive information can leak from one client to the other. This may also provide an attacker with information useful in a broader exploitation of other security flaws, such as aiding the attacker in predicting addresses. Other problems may be related to double-freeing memory, using memory that was already freed, memory leaks, or memory exhaustion. The possible ramifications vary from denial of service and information disclosure to execution of malicious code.
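A minimal sketch of the information-disclosure variant (illustrative structure and names): a partially initialized stack structure is copied back to the caller, leaking whatever bytes happened to be on the stack:

#include <cstddef>
#include <cstring>

struct Reply {
    int  status;
    char fields[60];   // fields this handler does not fill in
};

// The Reply structure is only partially initialized, yet the whole struct is
// copied back to a potentially less trusted caller, leaking stale stack bytes.
size_t BuildReplyLeaky(void* out) {
    Reply r;                        // uninitialized stack memory
    r.status = 0;                   // only one field is set
    memcpy(out, &r, sizeof(r));     // the remaining bytes leak to the caller
    return sizeof(r);
}

// Zero-initializing the structure before use avoids leaking stack data.
size_t BuildReplySafe(void* out) {
    Reply r = {};                   // all bytes zeroed
    r.status = 0;
    memcpy(out, &r, sizeof(r));
    return sizeof(r);
}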

A quite different group of security code vulnerabilities is related to synchronization and timing errors. Good examples are TOCTOU (time-of-check, time-of-use) race conditions. Security and sanitization checks are worthless if performed on data or resources that may be changed by an attacker after the check but before the actual use. Validation has to be performed on private copies of data that cannot change asynchronously. Resources need to be referenced properly to ensure they don't get deleted or replaced.
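A minimal POSIX-style sketch of a TOCTOU race (illustrative; the same pattern applies well beyond file APIs): a property is checked on the name, but the object behind the name can change before it is used:

#include <cstdio>
#include <sys/stat.h>

// The code checks a property of the file named by 'path' and then opens it, but
// an attacker can replace the file (for example with a symlink to a sensitive
// file) between the stat() and the fopen(). The check and the use do not
// operate on the same object.
FILE* OpenRegularFileRacy(const char* path) {
    struct stat st;
    if (stat(path, &st) != 0 || !S_ISREG(st.st_mode)) {
        return nullptr;                      // check: must be a regular file
    }
    // window: the object named by 'path' can be swapped here
    return fopen(path, "r");                 // use: may now open something else
}

// Safer pattern: open first, then validate the already-opened handle (for
// example with fstat on the descriptor), so the check and the use see the same
// object.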

Multithreaded environments put especially strong requirements on code (services, kernel components) in terms of synchronization of access to shared objects and resources. Programmatic interfaces that allow attackers access to concurrent function calls or their asynchronous cancellation open a window for possible race conditions. Problems caused by missing locks, dropping locks and assuming preserved state, misuse of interlocked operations, or using a disjointed set of locks can lead to unsafe memory operations or deadlocks.

In the case of object lifetime management problems, if code makes a new object available to other threads by manipulating global data (publishing), it needs to be in a consistent (usually completely initialized) state. Proper managing of the lifetime of an object is determined by a proper reference counting mechanism. Broken object destruction and cleanup logic may give an attacker a way to unload code early (such as a plug-in) and free memory that is still in use.

Many security coding errors are not directly related to manipulations on untrusted data, but rather to its actual interpretation or its influence on program behavior and results. The classic examples are found in injection attacks that may occur when data submitted by an attacker is used to parameterize some other content such as a script, Web page, command line, or format string. By using special escape characters or control sequences, an attacker is able to force untrusted data to be interpreted as part of an executed script or a command. Problems arise also when untrusted data is used to construct names and paths to resources that are about to be created or used (files, sockets, registry, shared sections, global objects). Directory traversal or canonicalization issues make code vulnerable to all sorts of redirections that may result either in usage of resources that are under attackers' control or disclosure of sensitive information (for example, user secrets or network credentials).
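As a small sketch of the directory traversal case (hypothetical paths and names): untrusted input is concatenated into a resource path without canonicalization, so parent-directory sequences escape the intended directory:

#include <string>

// An attacker-supplied file name is concatenated into a path without
// canonicalization, so a value such as "..\\..\\Windows\\system.ini" escapes
// the intended data directory.
std::string BuildDataPathUnchecked(const std::string& userFileName) {
    return "C:\\ProductData\\" + userFileName;   // "..\" sequences are honored
}

// A minimal mitigation rejects separators and parent-directory references;
// real code should also canonicalize the final path and verify its prefix.
bool IsSafeFileName(const std::string& name) {
    return !name.empty() &&
           name.find('\\') == std::string::npos &&
           name.find('/')  == std::string::npos &&
           name.find("..") == std::string::npos;
}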

Security vulnerabilities may also result from faulty assumptions about the data origin and destination (client id, network address, session id, or context handle), the order in which it is coming (network messages, API calls), and the amount (too much or too little data). Often, an attacker is able to control, to some extent, the environment in which code executes by creating named objects before the genuine product creates or uses them (name squatting attacks), filling up the disk space, or either blocking or redirecting network communication. Failure to handle these situations may make code vulnerable to elevation of privilege or denial-of-service attacks. Other common problems result from making assumptions about the technologies used by the code—either about their security guarantees (integrity of the communication channel) or about the inner workings of their APIs (providing incorrect combination of parameters and not checking return values instead of honoring API contracts).

Results from code reviews are often not limited to finding problems classified as code-level problems. Reviewers operate on the level of the actual implementation, and although they can benefit from using design-level documentation, including specifications and threat models, they are not supposed to rely on this data. Thus, code reviews still lead to identification of design-level issues or inconsistencies between the specification and implementation. The types of problems commonly found include exposure of dangerous functionality, incorrect implementation of protocol, usage of custom pseudo-security mechanisms (such as authentication schemes and access checks), or identification of ways to bypass security barriers.

Processing Results

A security code review cannot be considered successful if the security of a product is not improved. Code review should provide output that is useful for development teams to make meaningful changes to a product. Achieving this review-based improvement is connected with the two practical requirements of complete documentation and accurate triaging.

Documentation for each of the identified security code vulnerabilities should contain all details needed to locate and understand the issue. The details should include pointers to flawed code, an explanation of the problem, and justification for why this is a vulnerability. Adding recommendations for a fix is a useful practice, but selecting and preparing the actual solution is the responsibility of the code owners. If any data is missing or it is not clear why a coding error is a security vulnerability, additional research is required from the product team, which is not always possible due to resource or time constraints.

Another important requirement relates to accurate triaging of code vulnerabilities. If the severity of a problem is set too low, it may not be fixed by a product team. On the other hand, if severity is set too high, the problem may be selected for fixing instead of another problem with higher practical impact. The triaging process strongly depends on the quality of the security bug threshold, but also on an understanding of the priorities of code reviewers (who do the triaging) and developers (who act on the results of triaging). As we mentioned earlier, triaging security code errors should not be affected by the availability of exploitation prevention mechanisms.

Wrapping Up

Code reviewing efforts can provide more useful information than just a list of security problems. If possible, reviewers should also document code coverage, confidence in specific code areas, and general recommendations for code redesign and cleanup. Code reviews are also unique opportunities to enrich organizational knowledge, increase security awareness, and improve the effectiveness of security tools. Last, but definitely not least, development teams can use the results of code reviews to help prioritize future product security efforts.

We guarantee that software will always have security vulnerabilities, although the nature of those vulnerabilities and practical impact will change with time. Automated security tools are able to identify more and more coding errors, but some vulnerabilities will still be missed (either not detected or hidden under large numbers of false positives). Manual source code analysis is not a replacement for these tried-and-true tools, but it can often be advantageously integrated with them.

The manual code review approach is expensive, difficult, and highly dependent on the experience and commitment of the participants. However, in many situations the project requires this investment in order to obtain acceptable confidence about the security of a product or its critical components. An experienced human reviewer is still capable of identifying issues that would be missed by tools. As long as a human can be the cause of security problems, a human should also be a part of the solution.

Courtesy: http://msdn.microsoft.com/hi-in/magazine/cc163312(en-us).aspx#S1

Monday, January 19, 2009

Some Spiritual Words

दुःख सुख जो रुलाते है और हंसते है, यही अंधकार की जड़ है ! दुःख में चीखना चिल्लाना प्रभु के विधान में असंतोष व्यक्त करना है ! भक्त्ति योग में स्थित भक्त्त तो प्रत्येक स्थिति में प्रभु इच्छा मान कर संतुष्ट रहता है ! प्रभु से मांगो मत ! तुम्हारे मांगने का अर्थ है कि प्रभु जानते नही! तुम्हरी परिस्थिति में प्रभु जो कर रहे है वे तुम्हारे कल्याण के लिए ही कर रहे है !

Considering worldly happiness and suffering to be real is the cause of ignorance. People who complain in testing times are actually interfering in the Lord's plan. A devotee should accept every situation as a blessing of the Lord. Do not ask anything of the Lord; asking implies that the Lord does not know about your situation. Keep remembering Him in the faith that whatever He is doing is for your benefit !!

Tuesday, January 13, 2009

Five Strategies for 2009 IT Gold

Let's talk about running successful IT projects in 2009. This discussion is more important than ever because IT problems remain common, with some estimates suggesting that 68% of projects fail. Despite those staggering odds, you can follow these five strategies to reach the IT pot of gold.

1. Meet business needs. Every IT project must accomplish a business goal or risk becoming a wasteful boondoggle. Poor communication between business and technology groups complicates this simple concept inside many organizations. If the business side routinely criticizes your IT team, get together and ask them for guidance. While isolation brings failure, discussion is a true harbinger of success. Conversation with the business is the right place to begin an IT improvement program for 2009.

2. Innovate. Conversations with the business should help both sides work together with greater creativity and flexibility. Adaptability is fundamental to survival, especially in tough economic times, so being ready to accept change is a prerequisite for success. Although listening carefully to user requirements is the first step, being self-critical as an organization is also necessary. Great things happen when IT embraces a culture of continuous change and improvement.

3. Be honest. Denial is the handmaiden of failure and a leading cause of project death. Change is impossible until a team accurately recognizes its own weaknesses. Having done so, the team can take remedial measures that shore up weaknesses and support strengths. Objective self-appraisal is the hardest item on this list to accomplish; few organizations do this well.

4. Align vendors. Virtually all projects involve the IT Devil’s Triangle: the customer, technology vendor, and services provider. “These groups have interlocking, and often conflicting, agendas that drive many projects toward failure.” Given the great importance of these relationships, success depends on managing the vendors to your advantage. Use contractual incentives and penalties to ensure external vendors operate with your best interests in mind.

5. Arrange sponsorship. Many IT initiatives cut across political boundaries within an organization, which makes gaining consensus among participants and stakeholders hard. Since problems inevitably arise, a strong executive sponsor is a critical success factor on all large projects. Make sure the sponsor fully understands his or her role and is committed to active participation. The best sponsors care passionately about the project's goals. Conversely, sponsors who don't play an appropriate advocacy role when needed can kill an otherwise healthy project.

These five points cover the relationships between IT and its environment, which includes internal stakeholders and external partners. They also address culture and process, bringing together the essential ingredients needed to overcome many of the problems that plague IT.

What do you think is the best path to achieving successful IT in 2009?

Saturday, January 10, 2009

Don’t Quit...........

These are the words of my grandpa. He used to tell me these lines whenever I felt depressed, so I thought I would share them with you all.....

When things go wrong as they sometimes will;
When the road you’re trudging seems all uphill;
When the funds are low and debts are high;
And you want to smile, but you have to sigh;
When care is pressing you down a bit…rest if you must

BUT DON'T YOU QUIT

Success is failure turned inside out;
The silver tint of the clouds of doubt;
And you never can tell how close you are;
It may be near when it seems afar;

So stick to the fight when you are hardest hit…..
It’s when things go wrong, that you must NOT QUIT

Believe in Yourself

If you think you are beaten;
You are……………………
If you think you dare not;
You don’t…………………
If you’d like to win, but think;
You can’t…………………
It’s almost a cinch you won’t

If you think you will lose;
You’re lost………………

For out in the world we find………SUCCESS BEGINS WITH A fellow's will……………

It is all in the state of mind
Life's battles don't always go to the
stronger or faster man...........
But sooner or later the man who wins......

IS THE ONE WHO THINKS HE CAN!!!!!

So friends, just believe in yourself and touch the sky

Thursday, January 8, 2009

Web-based operating systems

Want to see what lies ahead in the world of operating systems? Head to the Web. That's where you'll find some workable examples of operating systems that move everything (applications, files, and communications) from the confines of your desktop to the more widely accessible Internet.

And mind you, Web-based operating systems are more than just a collection of applications that run within a browser. They're self-contained environments in which you can create and store documents, copy files from one folder or drive to another, and conduct communications.

In short, almost everything you can do from Windows or the Mac OS can also be accomplished within a Web OS. All you need is a Web browser to get there. Here's a look at some of the options.

While the major players in the software industry are not yet among those with Web-based operating system (OS) prototypes, it's clear that the big names are paying attention and making plans.

Google's Chrome, with its Spartan interface (largely devoid of visible menus, button bars, and status panels), easily brings to mind the beginnings of an operating system when it's expanded to full screen.

And Microsoft, although deriving a large portion of its revenue from the lucrative desktop applications business, has just announced that it will create Web-based versions of its Microsoft Office applications and make them available for free.

eyeOS

A good place to start in your discovery of Web-based operating systems is eyeOS (http://eyeos.org), which is free, open source, and very easy to sign up for. There's no need to install anything to use eyeOS.

Simply sign up with a user name and password to create an account, and from that point forward, you have an operating system on the Web, accessible from any browser. eyeOS creates space on its servers to store your operating system settings and any files you create.

eyeOS resembles contemporary desktop-bound operating systems.

There's a workspace area, or desktop, along with icons on the left that represent shortcuts to applications, including a word processor, calendar, contact manager, RSS feed, and a trash bin.

Fire up the eyeOS word processor and you'll find yourself in a serviceable document creation tool, replete with toolbar buttons for most of the formatting tasks that users require today.

Documents you save are stored on eyeOS's servers by default, so there's no local storage involved. You can, if you choose, download the files you create to your own PC and upload files to your eyeOS environment.

The beauty of a Web-based environment, however, is that you can shut down your browser -- and thus your eyeOS operating system -- on one machine, launch a browser on another machine in another location, and then launch your eyeOS desktop again.

eyeOS even remembers all of the applications and documents you were last working on, so the workspace you see is exactly the one you left off with.

A green eyeOS button at the bottom middle of the screen is analogous to the Windows Vista Start button.

It contains shortcuts to system settings, applications, and a few other commands, including Close Session. Enter System Preferences, and you'll see some impressive customisation options, including the ability to change the theme, or look, of eyeOS to resemble Vista, Ubuntu, Gnome, or other operating systems. The one glaring omission from eyeOS is an e-mail client. Apparently you're expected to bring your own e-mail.

G.ho.st

G.ho.st (http://g.ho.st/) is in some ways even more full-featured, and certainly more colourful, than eyeOS. After you sign up, for free, G.ho.st carves out an impressive 5 gigabytes of file storage on its servers for you, and it creates your very own G.ho.st Mail e-mail account, with 3 gigabytes of storage.

Like eyeOS, there's nothing to install. Once you sign up, you'll receive a confirmation e-mail message. Click the activation link inside, and you're ready to go.

The first time you launch G.ho.st, your browser will switch to full-screen mode so that you can see everything G.ho.st has to offer. There's a full-featured word processor, spreadsheet, e-mail, your personal G.ho.st drive for file storage, instant messaging, and even a few games.

There's plenty of hand-holding in G.ho.st as well, with icons that offer to take you on a tour of G.ho.st, help you set up your e-mail, create and edit documents, and upload files from your desktop computer to your G.ho.st environment.

A Go button in the lower left-hand corner of the G.ho.st screen mimics Vista's Start button; it provides handy access to all of the operating system's features and programs.

G.ho.st is full of glitz and color, and it is consequently more demanding of your hardware and somewhat more sluggish than eyeOS, which is streamlined by comparison. Still, many will likely find that G.ho.st's friendliness will make any performance hit worthwhile.

Desktop Two

Desktop Two (http://desktoptwo.com) is a Java-based Web operating system that's the quickest of all to set up and get going.

After a brief sign-up routine, the desktop loads, and you're ready to start exploring.

Desktop Two offers more applications that allow users to create their own presence on the Web than the other major Web operating systems. Along with a word processor and e-mail program, Desktop Two provides a Web site editor and a blogging programme.

The blogging application, in particular, is impressive, providing a two-click entry into the world of setting up and maintaining your own blog.

Once you create your first blog entry, the programme provides you with the Web address that you can distribute to the world so that others can visit your blog on the Internet.

Desktop Two's conventional applications are less impressive, however, in part because the operating system was not always able to save documents to Desktop Two's online storage system.

Why Web-based OS

One could argue that a Web-based operating system is redundant, since one needs a computer, operating system, and Web browser to access an online operating system.

While that's true, the point of an online operating system is complete environment portability.

That means being able to log on to any computer that has an Internet connection and, in the time it takes to launch your Web OS, having all of your applications and documents ready for you to resume work.

Although you could cobble together many of the elements of a Web OS by using, say, Google Docs, Yahoo Mail, and other online applications, doing so would require you to make several stops around the Internet.

There's no doubt that today's Web-based operating systems are far from feature-laden, and they probably will not tempt many to abandon their current routine that combines desktop and Web-based software.

But given the push that the major players in the industry are making toward a completely Web-based future, there's also little doubt that Web-based operating systems, or some form thereof, are in our collective future.