Friday, December 30, 2011

Let the clock talk

Software testing is the art of ideas, and thinking the way a clock does is always superior: the clock is the one thing that always thinks about 'time', because time is its profession. As testers, most of our procedures are tied to time. They are:


  • Time to investigate a new feature.
  • Time to analyze a requirement.
  • Time to acknowledge a design page.
  • Time to write a test plan.
  • Time to carve out a test case.
  • Time to carry out a test case.
  • Time to re-test a test case.
  • Time to write down a test report.
  • Time to release the product.
  • Time to take rest.


5 Whys in Software Testing

I hope we have all heard of the 5 Whys methodology in the general context of a project. How can it be implemented in software testing?

5 Whys in software testing? What is it intended to do for us? How is it relevant to testing?

Imagine that, after a production deployment, someone on your team finds a bug. Your project manager asks why this was not found earlier, but then simply tells someone to fix it, and how the issue occurred is never recorded. Keeping questions to ourselves and assuming things is the most dangerous part of the requirement/development/testing phases. We hardly ever ask why an issue was reopened, where the issue was introduced, or whether additional issues will occur because of the hot fixes. We need to get to the root of the bugs, not just get rid of them. Good unit testing could be one part of the solution here.
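To make that concrete, here is a minimal regression-style unit test using Python's unittest; the discount function and the production defect behind it are hypothetical, a sketch of how a 5-Whys finding can be pinned down so it stays fixed:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical fix: the original (invented) defect applied the
        # discount twice for percent values above 50.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_half_price(self):
            self.assertEqual(apply_discount(200.0, 50), 100.0)

        def test_large_discount_applied_once(self):
            # Regression guard for the hypothetical bug: 80% off 100 is 20, not 4.
            self.assertEqual(apply_discount(100.0, 80), 20.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()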


Conclusion: Keep asking WHY about most of the answers until you are satisfied; usually the 4th to 6th why gets you there. But try to avoid bizarre questions.

Have a great testing day.



Thursday, December 29, 2011

Software Tester - Best Practices

As software test engineers, our important task is not only writing test plans/test cases and filing bugs. We have a few more apexes to balance alongside our testing effort. They are as follows:

(a) Performing a domain knowledge transfer (DKT) to the team with regard to the data collected.
(b) Analyzing how issues can be sorted out at code-review level for application performance.
(c) Prioritizing test strategies and test-selection techniques.
(d) Reviewing UX/UE documents.
(e) Scheduling the prerequisites for a project before requirement scrutiny starts.
(f) Safeguarding success with the testing process/methodology that is being applied.


Monday, December 26, 2011

CAR in IT!!!

Question: What is CAR?
Answer: Chuck Away the Requirements.

- Understanding of the proposed requirements.
- Pointing out the gaps and raising queries.
- Quick assessment & certification of a feature.
- Determining if an idea is achievable or not.
- Test coverage for that feature.


Friday, December 23, 2011

Usecase Format and Example!!!

Usecases for the {Functionality Name}

--------------------------------------


UseCase Name: [Give a reasonable name for the usecase.] [Usecase Id]
[Sample Format >> Tab-name_functionality_Report] [Sample Id >> (UCTNFR001)]

Actor: [The users who are going to operate the product/application.]
[Sample Actors >> New User/Vendor/Manager]

Assumption: [What the actor is assumed to have already done before executing this use case.]
[Sample Assumption >> User is a registered user.]

Description: [Precise synopsis about the functionality.]
[Sample Description >> Adding the item(s) in the shopping cart. {Present Tense}]

Steps:
[Steps to complete this functionality (don’t include the assumption{s} in the steps.)]
[Sample Steps >> {Present Continuous Tense}
1. Actor adds an item with cart ID, item ID and quantity.
2. System checks if the inventory is enough.
3. System updates the inventory level.
4. Updated shopping cart contents and total price are displayed.]

End state: [Describe the state once the actor has finished that functional procedure successfully.]
[Sample End State >> User adds item{s} to the shopping cart.]


The items below are extras on top of a normal use case:


Negative Flow: Apart from the primary flow/primary steps, any negative steps can also be mentioned here, along with wireframes (if available for both positive and negative flows).

[Sample Negative Flow >> If user clicks on 'Cancel' button, then application navigates to Home page.]

Note: Make sure the flow/steps read as complete and sensible to someone other than you.

Thursday, December 22, 2011

What will a standard bug post have?

Bug Summary: [Summary of what happened.]
Bug ID: [Most tracking tools will generate this automatically.]
Area Path: [Where in the application the issue is present.]
Build Number: [The version number you receive by mail from the dev team.]
Severity: [Text, Tweak, Minor, Major, Crash & Block.]
Priority: [None, Low, Normal, High, Urgent & Immediate.]
Assigned to: [Developer-XYZ]
Reported By: [Most tracking tools will pick this up automatically from your login.]
Category: [Design Issue, Functional Bug, JavaScript Error (web app), Added Feature & On Hold.]
Status: [New/Assigned/Resolved/Reopened/Acknowledged/Closed] (Depends on the tool you are using.)
Environment: [Windows XP / SQL Server 2005]
Screen Resolution: [Your monitor's resolution.]
Description: [Precise explanation of the bug.]
Steps To Reproduce: [Optional, when the description is self-explanatory.]
Expected Result: [What exactly the functionality should do.]
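If your tracker exposes an API, the same template can be captured as a structured record before posting. A minimal Python sketch (the field names and sample values are illustrative, not tied to any particular tool):

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        summary: str
        area_path: str
        build_number: str
        severity: str            # e.g. Text, Tweak, Minor, Major, Crash, Block
        priority: str            # e.g. None, Low, Normal, High, Urgent, Immediate
        assigned_to: str
        category: str
        environment: str
        screen_resolution: str
        description: str
        expected_result: str
        steps_to_reproduce: list[str] = field(default_factory=list)
        status: str = "New"      # bug ID and reporter usually come from the tool

    bug = BugReport(
        summary="Cart total not updated after removing an item",
        area_path="Shop > Cart",
        build_number="2.4.1",
        severity="Major",
        priority="High",
        assigned_to="Developer-XYZ",
        category="Functional Bug",
        environment="Windows XP / SQL Server 2005",
        screen_resolution="1280x1024",
        description="Removing an item leaves the old total on screen.",
        expected_result="Total recalculates immediately after removal.",
        steps_to_reproduce=["Add two items", "Remove one", "Observe the total"],
    )
    print(bug.summary, "-", bug.status)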

Tuesday, December 20, 2011

Software Testing Reviews

Inspection: It is the most methodical and careful type of peer review. Inspections are more effective at discovering defects than informal reviews.

Pair Programming: In Pair Programming, two developers work jointly on the same program at a single workstation and constantly review their work.

Pass-around: It is a multiple, parallel review where several people are invited to offer observations on the product.

Peer Deskcheck: In a peer deskcheck, only one person besides the author examines the work product. It is a casual review where the reviewer can use defect checklists and some analysis methods to increase the effectiveness of the review.

Team Review: It is a planned and structured approach, but less formal and less rigorous than an inspection.

Walkthrough: It is an informal review because walkthroughs typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics.

Responsibilities As Who?

Testers & Test Lead

- Understand the application under test.
- Prepare a test strategy.
- Help with preparation of the test plan.
- Design high-level test scenarios.
- Build up test scripts.
- Understand the data involved.
- Perform all the assigned test cases.
- Record the flaws in a defect-tracking system.
- Retest fixed defects.
- Assist the test lead with his/her tasks.
- Provide advice on defect triage.
- Automate test scripts.
- Understand the SRS.


QA Manager

- Preparation of the system test plan.
- Structuring of the test team.
- Scheduling the test preparation.
- Module distribution.
- Walkthrough of the test process.
- Client relationship management.
- Verification of the status information.


Project Manager

- Preparation of the SRS.
- Configuring the development team and test team.
- Management of requirements throughout the project life cycle activities.
- Review of the detailed design document.
- Review of unit test cases and integration test cases.
- Guidance on programming and related coding conventions and standards.

Thursday, November 17, 2011

How to be with Development Team?

1) Be neutral with your views.
2) Deliver the bitter flavor in a sweet-coated capsule.
3) Aim to raise your valuable points at the requirements phase.
4) Maintain sweet language during discussions.
5) While partying, mingle freely with them.
6) Don't needlessly exchange a lot of harsh expressions.
7) In critical circumstances, handle things in an apt way.
8) Share sensible thoughts or suggestions.
9) Point out their blunders in a roundabout way.
10) Make an effort to avoid miscommunication and earn a good name from them.

Wednesday, November 16, 2011

Response Time, Throughput and Utilization

(a) Response Time
Response Time is the delay experienced when a request is made to the server and the server's response to the client is received. It is usually measured in units of time, such as seconds or milliseconds.

(b) Throughput
Throughput refers to the number of client requests processed within a certain unit of time. Typically, the unit of measurement is requests per second or pages per second. From a marketing perspective, throughput may also be measured in terms of visitors per day or page views per day.

(c) Utilization
Utilization refers to the usage level of different system resources, such as the server's CPU(s), memory, network bandwidth, and so forth. It is usually measured as a percentage of the maximum available level of the specific resource. Plotting utilization against user load for a web server typically produces a characteristic curve: utilization rises with load until a resource saturates.
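Response time and throughput are easy to sample yourself. A rough Python sketch using the requests library (the URL and request count are placeholders; a real performance test would use a dedicated tool):

    import time
    import requests

    URL = "http://localhost:8080/"   # placeholder endpoint
    REQUESTS = 50

    start = time.perf_counter()
    latencies = []
    for _ in range(REQUESTS):
        t0 = time.perf_counter()
        requests.get(URL, timeout=10)            # one client request
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    print(f"avg response time: {sum(latencies) / len(latencies) * 1000:.1f} ms")
    print(f"throughput: {REQUESTS / elapsed:.1f} requests/sec")

Utilization, by contrast, is usually read on the server side (CPU, memory, and network counters) while a script like this drives the load.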

Alpha, Beta & Gamma

Alpha Testing: Alpha testing is mostly usability-style testing done in-house by the developers who built the software, or by testers. Sometimes alpha testing is done by the client or an outsider in the presence of the developer and tester. The version released after alpha testing is called the alpha release.

Beta Testing: Beta testing is done by a limited number of end users before delivery; change requests are raised and defects fixed as the users give feedback. The version released after beta testing is called the beta release.

Gamma Testing: Gamma testing is done when the software is ready for release with the specified requirements; it is done directly, skipping all the in-house testing activities.

Thursday, November 10, 2011

Prologue >> Automation Testing

What is Automation Testing?

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
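In its simplest form, that means a script which sets up preconditions, exercises the product, and compares actual outcomes to predicted ones. A minimal sketch with pytest (the Cart class is a hypothetical stand-in for the application under test):

    # test_cart.py -- run with: pytest test_cart.py
    class Cart:
        """Hypothetical stand-in for the application under test."""
        def __init__(self):
            self.items = {}

        def add(self, item_id, qty):
            self.items[item_id] = self.items.get(item_id, 0) + qty

        def total_quantity(self):
            return sum(self.items.values())

    def test_adding_items_updates_quantity():
        cart = Cart()                      # set up the precondition
        cart.add("SKU-1", 2)               # exercise the product
        cart.add("SKU-1", 1)
        assert cart.total_quantity() == 3  # compare actual to expected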


Benefits of Automation

Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error.

Repeatable: You can test how the software reacts under repeated execution of the same operations.

Programmable: You can program sophisticated tests that bring out hidden information from the application.

Comprehensive: You can build a suite of tests that covers every feature in your application.

Reusable: You can reuse tests on different versions of an application, even if the user interface changes.

Better Quality Software: Because you can run more tests in less time with fewer resources.

Fast: Automated tools run tests significantly faster than human users.

Cost Reduction: The number of resources needed for regression testing is reduced.

Other: Tests run without any human interaction.


Why is automation required?

  • Reduces test time and resources
  • Consistent test procedures
  • Ensures process repeatability and resource independence
  • Eliminates the errors of manual testing
  • Improves the efficiency of testing
  • Reduces testing costs
  • Gives consistent and accurate results


When to automate?

  • Regression testing
  • Data-driven testing
  • The application under manual test is stable
  • Applications that have long runs

Regression Testing: The regression test suite is the best candidate for automation. If testing is only needed for a short-duration project, it should be done manually. If the test suite needs to run for a long time, regularly for every iteration, build, and bug fix, then it should be automated. Always automate smoke test cases first, then sanity test cases, and then regression.
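One common way to encode that ordering is to tag cases so the smoke subset runs first on every build. A pytest sketch (the marker names are a convention you would register yourself in pytest.ini, not built-ins):

    # pytest.ini would register the markers:
    #   [pytest]
    #   markers =
    #       smoke: critical-path checks run on every build
    #       regression: full suite run per iteration
    import pytest

    @pytest.mark.smoke
    def test_login_page_loads():
        assert True  # stand-in for a real critical-path check

    @pytest.mark.regression
    def test_password_reset_email_contents():
        assert True  # stand-in for a deeper regression check

    # Run the smoke subset first:  pytest -m smoke
    # Then the full regression:    pytest -m regression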

Data-driven testing: A single test verifies multiple data sets, for example testing the application with various users.
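pytest's parametrize decorator is one way to drive a single test from many data rows (the login function here is a hypothetical stand-in for the real application call):

    import pytest

    def login(username, password):
        # Hypothetical stand-in for the real application call.
        return username == "admin" and password == "s3cret"

    @pytest.mark.parametrize("username,password,expected", [
        ("admin", "s3cret", True),    # valid user
        ("admin", "wrong", False),    # bad password
        ("guest", "s3cret", False),   # unknown user
        ("", "", False),              # empty credentials
    ])
    def test_login_with_various_users(username, password, expected):
        assert login(username, password) is expected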

The application under manual test is stable: We cannot automate an application that is unstable, i.e., when the functionality of the application keeps changing.

Long-Run Projects: Long-run projects are a good fit for automation. Automating a short-duration project does not make sense; I personally go for automation on projects whose duration is more than six months.


When NOT to automate?

  • The functionality of the application changes frequently
  • The project doesn't have enough time
  • Tests with unknown results cannot be automated


Wednesday, November 9, 2011

Smoke Vs Sanity

1. Smoke: Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow-and-wide approach whereby all areas of the application are tested without going too deep.
   Sanity: A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.

2. Smoke: A smoke test is scripted, either using a written set of tests or an automated test.
   Sanity: A sanity test is usually unscripted.

3. Smoke: A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
   Sanity: A sanity test is used to determine that a small section of the application is still working after a minor change.

4. Smoke: Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification).
   Sanity: Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.

5. Smoke: Smoke testing is a normal health check-up of a build before taking it into in-depth testing.
   Sanity: Sanity testing verifies whether the requirements are met or not, checking all features breadth-first.

Static Testing Vs Dynamic Testing

Static Testing

The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for Coding, Integrating and Deployment. Reviews, Inspections and Walkthroughs are static testing methodologies.

Dynamic Testing

Dynamic Testing involves working with the software, giving input values and checking if the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are few of the Dynamic Testing methodologies.

For example: inspecting a car without running it is static testing, while test-driving the car is dynamic testing.
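To put the same distinction in code terms: a reviewer reading the function below is doing static testing; running the assertions is dynamic testing (the function is a toy example):

    def fahrenheit_to_celsius(f):
        # Static testing: a reviewer checks this formula against the spec
        # without ever executing it.
        return (f - 32) * 5.0 / 9.0

    # Dynamic testing: give input values and check the output is as expected.
    assert fahrenheit_to_celsius(212) == 100.0
    assert fahrenheit_to_celsius(32) == 0.0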

Buggy Application (or) Software ?

Question: Is your "Software/Application" diseased?

>>> Then there are a few causes for unhealthiness. They are as follows;

  1. Breakdown in information exchange.
  2. Unrealistic development time-frame.
  3. Poor design logic.
  4. Weak coding practices.
  5. Lack of version control.
  6. Too little focus on documentation maintenance.
  7. Infectious third-party components.
  8. Too little attention to software repairs.
  9. Last-minute alterations.
  10. Individual traits.


URL (another place where this post also appeared): Application-Diseased

Tuesday, November 8, 2011

5 C's of Software Testing Techniques

Context – Context is King of the testing cards.

· Software testing is always context dependent.

· In a jointly operated environment in particular, we need a clear, shared context.

· When people work together, the main ingredient is the project's context.

· It needs to be right: no matter what the format is, if the context is not appropriate, everything leads to failure or fault.

Consistency – Consistency is Queen of testing cards.

· Consistency in software testing can be viewed as the application's effectiveness: a consistent user experience.

· A consistent tester is one who keeps well-versioned test cases and test suites.

· Testing with a steady, consistent approach makes planning better.

· The position of text and the fonts in an application need to be consistent across all browsers.

Correctness – Correctness is Jack of testing cards.

· Correctness is the essential purpose of software testing.

· Correctness is telling the right behavior from the wrong one.

· Validating correctness with various inputs is what testing is meant to be.

· Basic correctness should be established as early as possible, which keeps testing uncomplicated.

Communication – Communication is Joker of testing cards pack.

· Communication plays a vital role in software testing.

· A tester should have strong communication skills, since your points need to be noticed.

· You should be able to express your views effectively with regard to bug tracking.

· Communication acts as a chief mediator in the traceability matrix, aiding customer relationship management and commitment negotiation.

Company (or) Community – Company (or) Community is Ace of testing cards.

· Testing communities and testing companies move software testing to the next level.

· Through communities and companies, students in schools and colleges learn how important and elegant software testing is. We can see any number of independent software-testing companies nowadays.

· The software testing market offers huge opportunities for both pure-play testing companies and IT services companies.

· Testing companies and communities are organizing more conferences to make software testing a center of attraction in IT.



URL (another place where this post also appeared): 5 C's of Software Testing

Automation & Mind

Automation testing has started. What is on my (your) mind?

· Is application stable?

· Need to analyze the requirement.

· What are the pages/modules involved?

· What manipulation happens to the data?

· What kind of framework need to be implemented?

· Need to improve in-depth knowledge of the domain.

· Will my framework be applicable to a web application?

· What are the field validations?

· Where is information stored in the database?

· Questions related to object handling in the application.

· What are the business rules that govern the process?


Epitome of Testing

For those who are not familiar with the word ‘Epitome’, it’s nothing but “Essence”.

As a person in a QA/tester role, there are a number of typical roles that can be adopted.

Sheriff – Bringing order to the Wild West. Chaotic development processes are roped in and consistency and discipline instilled in the vacuum.

Cheerleader – Pumping the team up to be more than they are. ‘Better unit tests? Way to go’

Cop – More or less the opposite of the Cheerleader. While typically you get more flies with honey, sometimes you do have to bring out the vinegar.

Negotiator – An active role in figuring out the trade-offs between development and test.

Marketer – A longer-term variation of the Negotiator which uses more subtle techniques for furthering the agenda.

Coach – Strategist and morale support.


Testing is Hyped


• Good QA testers are worth their weight in gold.

• Unit testing finds certain errors; manual testing others; usability testing and code reviews still others.

• The peer pressure of knowing your code will be analyzed pushes you to deliver higher quality.

• Another huge problem with developer tests is that they won't tell you if your software sucks.

• No single technique is effective at detecting all defects.

• Testers are a developer’s editor.


OSDLC

MaximuM ExposurE (M2M - E2E)
OSDLC [Original Software Development Life Cycle]

1. The development team constructs code that they believe is bug-free.

2. The build is tested; 15 bugs are identified by the testing team, and 5 are already known issues from the dev team.

3. Programmers resolve 10 of the bugs and explain to the testing team that the other 5 aren't actually bugs.

4. The testing team finds that 5 of the fixes didn't work as anticipated and notices 15 fresh bugs.

5. Steps 3 and 4 recur.

6. Owing to various marketing demands and a really premature product announcement based on an overly optimistic programming schedule, the product is released by pushing the code to the production instance.

7. Users locate 125 new bugs, including some domain issues we never considered bugs.

8. The original developers, cashed out for their great work, are nowhere to be found around the office.

9. The newly assembled programming group fixes almost all of the 125 issues but introduces 350 new ones.

10. The original developers now often visit the product's testing team and enquire about the good and the bad in the product. [At some point, the entire testing department quits.]

11. The company may be bought out by a competitor, using the earnings from its latest release, which had 500 bugs.

12. A new CEO is installed by the board of directors. He hires a set of programmers to recreate the product from square one, and an automation test group for the regression process.

13. The new development team constructs code that they believe is bug-free.



Thursday, October 13, 2011

Performance Testing vs Load Testing vs Stress Testing

PERFORMANCE TESTING
It is performed to evaluate the performance of the components of a particular system in a specific situation. It is a very wide term that includes load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing, and so on. This type of testing generally does not produce a simple pass or fail; it is done to establish the benchmark and standard of the application against concurrency/throughput, server response time, latency, render response time, and the like. In other words, it is a technical and formal evaluation of responsiveness, speed, scalability, and stability characteristics.


LOAD TESTING

It is a subset of performance testing. It is done by constantly increasing the load on the application under test until it reaches the threshold limit. The main goal of load testing is to identify the upper limit of the system in terms of database, hardware, network, and so on. A common goal of load testing is to set the SLAs for the application. An example of load testing:

Running multiple applications on a computer simultaneously: start with one application, then a second, then a third, and so on... now observe the performance of your computer.

Endurance testing is also a part of load testing; it is used to calculate metrics like Mean Time Between Failures (MTBF) and Mean Time To Failure (MTTF).
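A crude way to "constantly increase the load" is to ramp up the number of concurrent workers until response times degrade. A sketch using only Python's standard library (the URL and thresholds are placeholders; real load tests use dedicated tools such as JMeter or LoadRunner):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder system under test

    def one_request():
        t0 = time.perf_counter()
        urllib.request.urlopen(URL, timeout=10).read()
        return time.perf_counter() - t0

    # Ramp the load: 5, 10, 15, ... concurrent users.
    for users in range(5, 51, 5):
        with ThreadPoolExecutor(max_workers=users) as pool:
            latencies = list(pool.map(lambda _: one_request(), range(users * 4)))
        avg_ms = sum(latencies) / len(latencies) * 1000
        print(f"{users:3d} users -> avg {avg_ms:.0f} ms")
        if avg_ms > 2000:            # placeholder SLA threshold
            print("threshold reached; this is the upper limit")
            break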


Load Testing helps to determine:

  • Throughput
  • Peak production load
  • Adequacy of the hardware environment
  • Load-balancing requirements
  • How many users the application can handle with optimal performance
  • How many users the hardware can handle with optimal performance


STRESS TESTING

It is done to evaluate the application's behavior beyond normal or peak load conditions. It is basically testing the functionality of the application under high load. The defects it exposes are typically synchronization issues, memory leaks, race conditions, and the like. Some testing experts also call it fatigue testing. Sometimes it is difficult to set up a controlled environment before running the test. An example of stress testing:

A banking application can take a maximum load of 20,000 concurrent users. Increase the load to 21,000 and perform some transactions such as deposits or withdrawals. As soon as a transaction is done, the banking application's database will sync with the ATM database server. Check whether this sync happens successfully at a load of 21,000 users. Then repeat the same test with 22,000 concurrent users, and so on.

Spike testing is also a part of stress testing; it is performed by repeatedly loading the application with heavy load that spikes beyond production levels for short durations.

Stress testing helps to determine:

  • Slowness and errors at peak user loads
  • Any security loopholes under overload
  • How the hardware reacts under overload
  • Data-corruption issues under overload

Sunday, May 8, 2011

-Se7en Deadly Sins-


Lack of "Lust for finding Defects" Lust could be an objectionable vice in the Bible, but in the "Bible of Software Testing", lust is a good thing; lust for finding defects that is. Have a craving, appetite, or great desire towards finding defects is something that differentiates a great tester from that of a mediocre one. Once this lust dies down inside a tester’s heart, it would be very difficult to keep going.

Having said this, I do realize that there could be times like the "tester’s block syndrome" [a condition, associated with testing as a profession, in which a tester may lose the ability to find new bugs and defects in the software that (s)he is testing]. It can happen with anybody. But don’t let it become the end of the world for you. If you are struggling to find bugs in the software and feeling burnt out, "change the way you have been testing" – adopt new test ideas, try new ways to find where the AUT (application under test) might be broken, try out pair testing, explore new unexplored areas of the AUT and even try taking a short break. And still, if nothing at all works... then change your AUT! I know how difficult it can be to change the AUT (and your project) in certain contexts. In such cases, try out new applications (there are tons out there begging to be tested; just look around) and once you start finding defects in the new AUT, it won’t be long before you would start discovering defects (again) in your old AUT.

Envy If you are in the field of testing, then I can almost certainly bet that you have come across testing teams where only a few team members perform exceptionally well, and the others, instead of taking it as motivation, feel envious of them. Envy and jealousy lead to hatred, and hatred in turn takes you further away from the path to success.

Lack of "Greed for Knowledge" Like lust, greed also is a good thing to have for a software tester. Some call it the "burning desire to learn" and others call it "the passion to excel", but to me they all mean essentially the same thing. Once some great mind said -- "knowledge is wealth/money". And it couldn’t be agreed more for software testing. I believe that a tester should be like a "search engine king", who is a jack of all trades and the master of many! As a test manager I would want my testers to be knowledgeable in every aspects of computing -- knowledge about programming languages, operating systems, web services, technology updates, gadgets, search engines, scripting skills... everything counts as long as they help the team to be better at testing.

Sloth Laziness is not a luxury if you are in the software business, and the onus is even greater if you are a tester working on a tight testing schedule. In my opinion, this is one of the greatest sins a tester could ever commit: laziness in testing, laziness in learning new things, laziness in updating your skills, laziness in showing interest in finding defects in what you are testing... all of it can doom you and your career as a tester. So beware!

Wrath Numerous situations arise in a tester's life where (s)he is up against the team of programmers. But anger and wrath are never the solution in such scenarios. Hate the defects, NOT the programmer. Criticize the software that we test, NOT the programmer who coded it. And don't ever forget that to err is human, and if there were no errors, there would be no need for us (testers) on the team. Being diplomatic and factual, with a small dose of humility, can do wonders in dealing with any such adverse situation; NOT anger or wrath.

Pride I can imagine how an occasional self-pat can boost self-confidence and create room for much-needed motivation. But be careful NOT to overdo it; keep it at the "occasional" level. Pride is probably the easiest gateway to failure, and the feel-good factor associated with pride makes it even more dangerous.

Gluttony Yes, I said that greed is a good thing if you are in the profession of software testing. But greed is not the same as gluttony (over-testing, excess testing)! Knowing where to stop testing is a lesser-known art. If you didn't find any new defects in the past hour of testing, then perhaps you wouldn't find any even if you extended the session for another couple of hours! In such cases, taking a much-needed break is wiser than extending the testing session. Furthermore, every test project has budget constraints, and you probably wouldn't want your testing efforts to look like a liability to the whole project instead of adding value, would you?



Courtesy : Some Blogger

Build a great brand experience

It’s really hard to build a brand. It’s hard to get the attention of others, it’s hard to get people onto your website, and it’s hard to create something that people will buy and use. We realized early on that the best visitors we get hear about us through word of mouth. Word of mouth is driven by happy people who have a great brand experience.

This is how we’ve focused on building a great brand experience:

  1. Trust : A great brand experience needs to establish trust between the business and its customers. We establish trust by giving surprisingly honest feedback to customers (such as sending them to a competitor if they’re not a good fit), making it easy for anyone to get in contact with us (by putting a phone number on our website), and focusing on coaching instead of selling.
  2. Do the work for the customer : We try to do as much work for the customer as possible. This means spending extra time designing a product to simplify the first time experience, asking for the least amount of information needed to solve a problem, and putting the onus on us to do the work.
  3. Creating a genuinely useful product.
  4. Surprise people with greatness : Give people unique and useful things that they’ll actually use. Give out the best quality t-shirt you can find instead of settling for the standard Hanes. Give people unique things they couldn’t get anywhere else. Give people something that will make them feel proud to support you.

Brand is everything. It’s every interaction with someone outside of your business. It’s your company culture. It’s your production process and the way you deal with a bug.

The secret to brand building is to start early and often. Your brand is not your logo or color scheme, it’s how people think about you. It’s the way that you represent yourself.

Wednesday, May 4, 2011

Bug Life Cycle
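The life cycle can be sketched as a set of allowed status transitions, using the statuses from the bug template above. A minimal illustration in Python (the exact states and transitions depend on your tracking tool):

    # Allowed status transitions in a typical bug life cycle (illustrative).
    TRANSITIONS = {
        "New":          ["Assigned"],
        "Assigned":     ["Resolved"],
        "Resolved":     ["Reopened", "Acknowledged", "Closed"],
        "Reopened":     ["Assigned"],
        "Acknowledged": ["Closed"],
        "Closed":       [],
    }

    def move(status, new_status):
        if new_status not in TRANSITIONS[status]:
            raise ValueError(f"cannot move a bug from {status} to {new_status}")
        return new_status

    # A bug that gets fixed, fails retest once, then finally closes:
    status = "New"
    for step in ["Assigned", "Resolved", "Reopened", "Assigned", "Resolved", "Closed"]:
        status = move(status, step)
    print("final status:", status)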


Fault, Error & Failure

Fault : It is a condition that causes the software to fail to perform its required function.

Error : Refers to the difference between the actual output and the expected output.

Failure : It is the inability of a system or component to perform required function according to its specification.

IEEE Definitions:

  • Failure: External behavior is incorrect.
  • Fault: Discrepancy in the code that causes a failure.
  • Error: Human mistake that caused the fault.

Note:

  • 'Error' is the developer's terminology.
  • 'Bug' is the tester's terminology.
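All three terms can be seen in a few lines of code (a deliberately broken toy function):

    def average(numbers):
        # Error: the human mistake was dividing by a hard-coded 2.
        # Fault: this wrong line sits in the code whether or not it runs.
        return sum(numbers) / 2

    # Failure: the incorrect external behavior when the fault is executed.
    result = average([1, 2, 3])   # expected 2.0
    print(result)                 # actual 3.0 -> an observed failure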


Wanna be a Good Tester?

1) Programmers should not test their own code.
2) Go beyond requirement testing.
3) When regression testing starts, use the previous bug graph/defect-tracking history.
4) Analyze whether code changes were done properly for testing purposes; if not, don't accept the build.
5) Keep developers away from the test environment, but never hurt them.
6) Testers need to be involved right from the software requirement and design phases.
7) Share your best testing practices/experiences with your testing friends.
8) Think of the positives and negatives while going into an application [technically and non-technically].
9) Before testing, learn to analyze what your plan is.
10) Test cases need to be available to developers prior to coding, so that they can't blame you.
11) Write clear, descriptive, unambiguous bug reports.
12) Increase your conversations with developers; you might get a new direction for testing the application/modules.
13) Understand how programmers think, then do it the other way.
14) Ask questions of clients/developers/QA managers. They might be silly, but don't hesitate; some questions can be a turning point for your project/product.
15) Avoid keeping all your communication verbal.
16) Think like an end user. Listen to end users more and developers less.
17) Learn to say NO when quality is insufficient. Deliver with quality.
18) Increase your learning curve in automated test tool programming.
19) Know your application and its domain.
20) Don't be diplomatic.