Design Patterns

I’ll be exploring Design Patterns in depth in an upcoming series. For now, take a quick look at the practical discussion below, excerpted from the Wikipedia page (http://en.wikipedia.org/wiki/Software_design_pattern):

Design patterns can speed up the development process by providing tested, proven development paradigms.[6] Effective software design requires considering issues that may not become visible until later in the implementation. Reusing design patterns helps to prevent subtle issues that can cause major problems, and it also improves code readability for coders and architects who are familiar with the patterns.

In order to achieve flexibility, design patterns usually introduce additional levels of indirection, which in some cases may complicate the resulting designs and hurt application performance.

By definition, a pattern must be programmed anew into each application that uses it. Since some authors see this as a step backward from software reuse as provided by components, researchers have worked to turn patterns into components. Meyer and Arnout were able to provide full or partial componentization of two-thirds of the patterns they attempted.[7]

Software design techniques are difficult to apply to a broader range of problems. Design patterns provide general solutions, documented in a format that does not require specifics tied to a particular problem.
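
As a small preview of that series, here is a minimal sketch of one classic pattern, Strategy, in Python. The class and method names are my own invention for illustration; the point is the extra level of indirection the quote mentions, with callers depending on an interface rather than on a concrete algorithm.

```python
from abc import ABC, abstractmethod
import zlib

class CompressionStrategy(ABC):
    """The extra level of indirection: callers depend on this
    interface, never on any one concrete algorithm."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NoCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data

class FileArchiver:
    """Context object; strategies can be swapped without changing it."""
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy

    def archive(self, data: bytes) -> bytes:
        return self.strategy.compress(data)

# Swapping behavior requires no change to FileArchiver itself.
archiver = FileArchiver(ZlibCompression())
print(len(archiver.archive(b"hello " * 1000)))
```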

Change Agents

Ran across this today on USA Today about Robert Brunner, designer of the Beats headphones: http://www.usatoday.com/story/tech/2013/12/30/robert-brunner-ammunition-beats-change/3497185/

A great short piece on how important the design experience is. A nice tribute, and totally relevant to software design, even though the article is about the industrial design of physical products like the Beats headphones.

Design

By Design I don’t mean what can be found on http://design.org/, though that is a good start. What I’m interested in exploring is the concept of good design in creating software user interfaces. As with any good design, it is ultimately the experience that leads to delight and overall enjoyment of using a tool. That is good design. There is much more to be said, and I will certainly follow up with a series of blog posts. For now I’ve run out of time for today. Cheers.

Troubleshooting and Debugging

For any technologist these are key skills. They are essential in every phase of the software development lifecycle, and all members of a project need them to some degree. Even a non-technologist needs these skills to get through life, whether or not they are aware of it. Essentially, these two terms are synonymous with the term “problem solving.” We all learn to solve problems from an early age. The problems of childhood are more basic and fundamental, such as how to get something good to eat or find something fun to do, but the skills learned then are very much the same ones we use as adults; the skills just become more sophisticated as the problems grow more difficult.

Some of the problems we face share commonalities and are pervasive across domains of expertise; others are isolated to specialized cases in a specific career. However specific the problems we face as adults, we must learn to solve problems for others and deliver satisfactory solutions. For technologists this means solving problems with devices and software in a way that satisfies our bosses, customers, and clients.

All over the web you can find references and resources on troubleshooting and debugging; the list is practically endless. Much of it is information specific to the tools and software being used. Some of it is general advice and guidelines, which is helpful for those feeling perplexed and overwhelmed. Of course, some of the best problem-solving advice is given out in the community forums, chat rooms, and blogs of the specific software or device platform that is giving the trouble. Depending on the size of the user base, knowledge of a given piece of software and its commonly recurring flaws and failures can be very useful, and those who are experts in it are much appreciated.

A very important thing to keep in mind, sometimes overlooked when we discuss troubleshooting and problem solving abstractly, is that not all expertise and troubleshooting ability is equally valuable. Market forces and economics are at work, and the law of supply and demand is in operation. What exactly we are able to troubleshoot and debug very well, and deliver in the form needed by the stakeholders, customers, and the actual users of a device or piece of software, is what determines our compensation. This is a fairly basic concept: we are compensated exceptionally well when we have skills and problem-solving abilities that are in high demand. In the technology world this means the platforms, devices, and software we are experts in represent competitive advantage for our employers, customers, or clients. We must be able to distinguish ourselves as providing something relatively unique and difficult for others to duplicate. For a technologist this means developing to a high degree the problem-solving, debugging, and troubleshooting skills in his or her chosen area of expertise.

Problem solving, debugging, and troubleshooting are learned skills that get better with experience and practice. For developing the fundamentals there are many templates to work from; the scientific method is a great place to start, among many others. Generally, what I’ve found to work well are the following steps:

1. Identify and define the problem

2. Capture and document all the errors and symptoms of the problem

3. Isolate the root cause

4. Get help from experts when needed

5. Come up with possible fixes and workarounds

6. Apply a fix and resolve the issue

7. Deliver the fix to the user base

8. Communicate the results to all stakeholders

Some examples are definitely in order. This post will end up being a series, as there is a lot of valuable material for improving troubleshooting and debugging skills, and much of the best advice comes from specific problems encountered in the real world.
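
To start with one small, hedged example, here is a Python sketch of step 3, isolating a root cause by shrinking a failing input until only the troublesome part remains (a simplified cousin of delta debugging). The records and failure predicate are invented for illustration:

```python
def shrink_failing_input(items, fails):
    """Step 3 (isolate the root cause) in miniature: repeatedly discard
    halves of a failing input while the failure persists, leaving a
    minimal reproduction to investigate. `fails` is a predicate that
    returns True when the bug triggers."""
    changed = True
    while changed and len(items) > 1:
        changed = False
        half = len(items) // 2
        for chunk in (items[:half], items[half:]):
            if fails(chunk):
                items = chunk
                changed = True
                break
    return items

# Hypothetical bug: the parser crashes whenever the record "BAD" is present.
records = ["a", "b", "BAD", "c", "d", "e"]
print(shrink_failing_input(records, lambda rs: "BAD" in rs))  # ['BAD']
```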

What is Software Quality Assurance?

Quality Assurance is often mentioned hand in hand with the role of testing in the development process. That attribution is entirely appropriate and correct; however, Software Quality Assurance is formally recognized as a much larger practice, touching areas much wider in scope than merely performing the test process. The Wikipedia definition is a great place to start. See below:

Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000 or a model such as CMMI.

SQA encompasses the entire software development process, which includes processes such as requirements definition, software design, coding, source code control, code reviews, change management, configuration management, testing, release management, and product integration. SQA is organized into goals, commitments, abilities, activities, measurements, and verifications.[1]

Dimensions of Software Quality

Software offers different aspects, or dimensions, of overall quality through the features and capabilities it delivers:

  • Accessibility: The degree to which software can be used comfortably by a wide variety of people, including those who require assistive technologies like screen magnifiers or voice recognition.
  • Compatibility: The suitability of software for use in different environments like different Operating Systems, Browsers, etc.
  • Concurrency: The ability of software to service multiple requests to the same resources at the same time.
  • Efficiency: The ability of software to perform well or achieve a result without wasted energy, resources, effort, time or money.
  • Functionality: The ability of software to carry out the functions as specified or desired.
  • Installability: The ability of software to be installed in a specified environment.
  • Localizability: The ability of software to be used in different languages, time zones etc.
  • Maintainability: The ease with which software can be modified (adding features, enhancing features, fixing bugs, etc.).
  • Performance: The speed at which software performs under a particular load.
  • Portability: The ability of software to be transferred easily from one location to another.
  • Reliability: The ability of software to perform a required function under stated conditions for a stated period of time without any errors.
  • Scalability: The measure of software’s ability to increase or decrease in performance in response to changes in its processing demands.
  • Security: The extent of protection of software against unauthorized access, invasion of privacy, theft, loss of data, etc.
  • Testability: The ability of software to be easily tested.
  • Usability: The degree of software’s ease of use.

There are in fact objective metrics that can be applied to measure each of these dimensions. None of them is bound to subjective aesthetics, although graphics and images can make a huge difference, sometimes going beyond enhancement to providing the bulk of the content and information users are looking for. It is vitally important to know what is considered the most valuable content and how good a job the software does in delivering it. This is the key to building in and maintaining software quality.

Content credit for the list of dimensions is due to the folks at http://softwaretestingfundamentals.com/dimensions-of-software-quality/

What is Sufficient Test Coverage for a Software Development Effort?

In any software development project, testing is an integral part of deploying a working product that meets and satisfies the needs of the prospective user base. Software is by nature a complex effort, typically the product of attempting to automate and streamline workflows that may or may not already include other software applications and implementations. The goal of any software product should therefore be a product that introduces and supports a better and improved approach. In some way it should be cheaper, faster, more secure, more robust, and of higher quality than what preceded it. If nothing else, it should offer a more complete and comprehensive solution that integrates well with the other parts of the user’s overall workflow. If it does none of these things, then there is little or no purpose to the new software’s introduction. About the only excuse for introducing software with no such improvement is this: the new software is designed to work on new hardware that will replace the old. Integration testing then becomes even more critical, to be sure that the new hardware, the other software, and the operating system designed to run on it introduce no conflicts.

The main point here is that test coverage is primarily determined and dictated by a few important criteria:

1. Users’ workflows

2. Features supported by the software

3. Configurations of the environment the software will run on

4. Performance and functionality expectations

There are other concerns, but these are the central concerns of testing and of determining whether the testing to be applied is adequate.

Generally, any testing effort can rest on a fundamental of computing: software has key functionality that must work properly before anyone gets creative with the aesthetics of layout and UI design. These basics boil down to the functions of input, processing, and output, which all computer science students learn at the beginning of their studies. For testing, we are concerned with whether the software correctly and accurately captures the right input from the user in a consistently reliable manner. If we can be sure the input is captured correctly, we move on to examining the processing mechanism to make sure it performs the stated function. From a tester’s point of view this is often something of a “black box,” and the only way to check the function is to enter the input data and submit it to the processing function, which in turn returns or delivers the output data to the specified channels. Once we know the correct set of data is output, we can focus on the particular format being delivered and make sure the data is returned in a way that can be used by the user base.
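
To make the input/processing/output framing concrete, here is a minimal black-box check in Python. The parse_price function is a hypothetical stand-in for whatever processing mechanism is under test; only its input and output behavior is examined:

```python
def parse_price(text: str) -> float:
    """Hypothetical unit under test: turns user-entered text into a
    price. To a black-box tester, only its input/output behavior
    matters, not its internals."""
    return round(float(text.replace("$", "").replace(",", "")), 2)

def test_black_box():
    # Input capture: does the right input reach the function intact?
    assert parse_price("$1,234.50") == 1234.50
    # Processing: does it perform the stated function?
    assert parse_price("0.1") == 0.1
    # Output: is the result delivered in a usable format?
    assert isinstance(parse_price("7"), float)

test_black_box()
print("black-box checks passed")
```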

These are the essentials and fundamentals of testing any software, and so far none of it is all that complex. What starts to get complex are the specific methods by which to verify and validate that the software’s input, processing, and output components are working properly. This rapidly moves the conversation to specific tools, hardware platforms, and types of networked environments, all of which require specific knowledge and expertise depending on the type of testing needed. Today, for most testing to be sufficiently comprehensive, end-to-end integration testing is required. This may mean the testing is performed by a single individual or by a team; the number of testers required depends on the size of what is typically called the test matrix. The test matrix is an abstract term that defines the scope and boundaries of the particular hardware and software components that must be supported for a successful software delivery and deployment. It determines the actual configurations that will be supported for live users in the field, in what is typically called production.
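
One lightweight way to picture a test matrix is as the cross product of the supported configuration axes. The axes and values below are invented examples, not a recommendation:

```python
import itertools

# Hypothetical supported-configuration axes for a web application.
operating_systems = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
browsers = ["Chrome", "Firefox", "Edge"]
locales = ["en-US", "de-DE"]

# The test matrix is every combination that must be covered.
test_matrix = list(itertools.product(operating_systems, browsers, locales))
print(f"{len(test_matrix)} configurations to cover")  # 3 * 3 * 2 = 18
for os_name, browser, locale in test_matrix[:3]:
    print(os_name, browser, locale)
```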

To achieve sufficient coverage of the test matrix, testers employ a variety of methods and approaches to actual test execution. Depending on the functionality and nature of the software and the components and UI (user interface) elements used, testers will commonly use the following (a brief sketch of the first two appears after the list):

1. Limit/boundary tests

2. Both positive and negative tests

3. Computational checks

4. Variations of input

5. SQL injection

6. Variations of input sequencing

7. User-role-specific tests (tests for properly authorized scope)

8. Performance and load testing

9. Regression testing
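
Here is the promised sketch of the first two techniques, using pytest against an invented set_volume function whose documented valid range is 0 to 100:

```python
import pytest

def set_volume(level: int) -> int:
    """Hypothetical unit under test with a documented range of 0-100."""
    if not isinstance(level, int) or not 0 <= level <= 100:
        raise ValueError(f"volume out of range: {level!r}")
    return level

# Limit/boundary tests: probe the edges and just inside them.
@pytest.mark.parametrize("level", [0, 1, 99, 100])
def test_boundaries_accepted(level):
    assert set_volume(level) == level

# Negative tests: confirm invalid input is rejected, not silently accepted.
@pytest.mark.parametrize("bad", [-1, 101, "50", None])
def test_invalid_rejected(bad):
    with pytest.raises(ValueError):
        set_volume(bad)
```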

The point of all this testing is to confirm the functionality and the overall performance of the software. It is the tester’s job to confirm that the software performs as intended and as specified in both the design and the requirements, and that in doing so it will meet the intended and expected use of the user base in a way that satisfies, meets, or exceeds expectations for its defined purpose.

It should be clear, then, that if the requirements and design are specified clearly there is no ambiguity. In real life the reality is not so simple: often both the requirements and the design are not so clearly spelled out. Overall this means a higher risk of failure for the project and a higher level of effort, not only for the tester but for everybody involved in the software deployment effort. The tester’s role can often be to help hammer out the design and requirements by highlighting the discrepancies and inadequacies. To be clear, though, the tester can only provide a supporting effort; it is the role of project management, the prospective customer, and those actually coding and configuring the solution to specify and drive the implementation of the requirements and the design.

So despite this secondary support role, the testing effort is inextricably linked to the documented requirements and design specifications that in essence define what the software is intended to be and how it is to behave. The testing effort relies on these to establish a baseline for comparison. As new features get rolled in, the old features must remain the same; testing and verifying this is called regression testing. Testers run tests against the already existing features to make sure they behave and perform as they did before the new features were introduced. This allows for incremental updates and easier isolation of problems introduced by configuration or code changes.
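
A minimal way to mechanize that baseline comparison is a golden-file regression test: record the output of an existing feature once, then fail whenever a later build produces anything different. The generate_report function and file name here are invented for illustration:

```python
import json, pathlib

GOLDEN = pathlib.Path("report_golden.json")  # recorded baseline output

def generate_report(orders):
    """Hypothetical existing feature whose behavior must not drift."""
    return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

def test_report_regression():
    current = generate_report([{"amount": 10}, {"amount": 32}])
    if not GOLDEN.exists():
        GOLDEN.write_text(json.dumps(current))  # first run records the baseline
        return
    # Every later run must reproduce the recorded baseline exactly.
    assert current == json.loads(GOLDEN.read_text()), "existing feature changed"

test_report_regression()
```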

There are very lengthy books discussing at length the long list of methods used in the discipline of software testing. Here we have more modest intentions: to outline the essentials that will help a project manager assess whether the testing performed provides sufficient coverage for the deployment of a given piece of software. The best tool for the project manager to obtain for measuring this sufficiency is the requirements traceability matrix. This is essentially a map, typically laid out in a spreadsheet, showing what the requirements are, the features actually built into the design, and the testing results for each feature. For the project manager it shows at a glance the overall state of the project. While some of these spreadsheets can be very large, and some project managers don’t use them exactly as I have described, it is indeed a true map of the overall condition of the software.
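
In miniature, a traceability matrix is just rows linking each requirement to the feature that implements it and the latest test result. The identifiers below are invented:

```python
# A toy requirements traceability matrix; real ones usually live in a
# spreadsheet or test-management tool, one row per requirement.
traceability = [
    {"req": "REQ-001", "feature": "login form",     "test": "TC-014", "result": "pass"},
    {"req": "REQ-002", "feature": "password reset", "test": "TC-022", "result": "fail"},
    {"req": "REQ-003", "feature": "audit logging",  "test": None,     "result": "not run"},
]

# The at-a-glance view a project manager wants: anything not passing?
for row in traceability:
    if row["result"] != "pass":
        print(f'{row["req"]} ({row["feature"]}): {row["result"]}')
```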

Another very useful tool is the bug trend tracking chart. If it is properly implemented and the project is moving forward successfully, the trendlines should show a bell curve: the numbers of new and open bugs rise, peak, and then fall. The falling tail means new bugs are becoming scarce and open bugs are being fixed, which means the software is on a reasonably definite path toward being stable, functional, and usable.
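
Here is a rough sketch of how such a chart might be produced from weekly bug counts, using matplotlib. The numbers are invented, chosen only to show the rise-and-fall shape described above:

```python
import matplotlib.pyplot as plt

weeks = list(range(1, 11))
new_bugs  = [2, 5, 9, 14, 12, 8, 5, 3, 1, 0]    # invented weekly counts
open_bugs = [2, 6, 12, 20, 22, 18, 12, 7, 3, 1]

plt.plot(weeks, new_bugs, label="new bugs per week")
plt.plot(weeks, open_bugs, label="open bugs")
plt.xlabel("week")
plt.ylabel("bug count")
plt.title("Bug trend: rise, peak, then fall toward stability")
plt.legend()
plt.savefig("bug_trend.png")
```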

Between these two tools, and by closely watching the various parts of the project timeline and coordinating the integration of each piece, we can be reasonably sure we are on track toward a successful launch and deployment. Of course, once the software is deployed the work is not done: there is much support and maintenance to come, and updated versions with additional capabilities are typically expected. This is the general nature of the software development cycle. At this point we can have high confidence that the test coverage provided is sufficient, unless you have good reason to question the results the testers are providing. In general, testers are keenly aware of the consequences of malpractice and of delivering test results that miss critical or severe bugs. If they are paid professionals, this is their role and the responsibility they are expected to fill. As a project manager, your primary task is to make sure that all the players on the project fill the responsibilities of their given roles well.

The Cost and Utility of Types of Automated Testing

Automated testing is not a new capability in the software development world. That said, despite the most earnest promises of large and well-established vendors, there are clear limits and specific uses that determine the effectiveness of a given implementation. The reality is that even today there are no low-cost machines that can apply the nuanced judgment required for the varied, complex problem solving that testing IT and software deployments demands. What current automated testing does well is repetitive regression testing. For what is called GUI (graphical user interface) testing, the two largest tool vendors are HP/Mercury and IBM/Rational. Notably, the actual names of the tools have changed over the years, but for now it looks like both vendors have settled on calling their product some variation of the name Functional Test. These tools identify the components of an application under test and allow a tester to script a test that is exercised on initiation of script execution. At the push of a button, the script will click a button, enter text into a field, apply a menu pull-down, or scroll down in a scroll box. This means the tester can deploy a script routine to perform predetermined actions along a prespecified path, one that varies only if specific inputs tell it how to vary. The scripts can also automatically capture errors as they occur. What they cannot do, and a human can, is dig down, investigate, and troubleshoot an error once it occurs; the human can go one step further toward resolving the issue.

What does it take, then, to set up this kind of test automation deployment? Well, if you go with IBM/Rational or HP/Mercury it can rapidly get very costly: $6,000 or more per seat/user license is the starting point, and that does not include additional plugins. That is just for the raw capability; it also does not include the time and expense of hiring someone to create the scripts, or the added support costs these two vendors charge. Large corporations are often willing to pay this; smaller companies find it cost-prohibitive. There are open source options such as Selenium and Ruby/Watir, which are entirely free and have vibrant, open communities willing to share workarounds. That said, these tools have the same constraints as the vendor-supported tools, specifically the inability to do the diagnosis and troubleshooting that remain well within the exclusive domain of human capability. They are a better choice in that the base cost is eliminated and support is arguably easier to find, but the actual development costs remain. And let there be no confusion: automating a test effort is a true development effort.
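
For a flavor of what these tools script, here is a minimal Selenium sketch in Python. The URL and element IDs are placeholders I invented; a real suite would wrap this in explicit waits, richer assertions, and reporting:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local ChromeDriver setup
try:
    driver.get("https://example.com/login")  # placeholder URL
    # The scripted, prespecified path: enter text, click a button.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Automated capture of an outcome; a human still does the diagnosis.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```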

The test automation effort is in fact a parallel development effort. Test automation excels at, and is best suited for, regression testing over all other types of testing. It takes time to reconfigure and rescript a test, however slight the changes to the software being tested might be. That is the reality. Though there can be some anticipation, there is little automating the prediction of the actual changes. The automation script must be changed by hand somehow, whether through parameters in a spreadsheet or actual lines of code. This means GUI test automation is best suited for applications whose interface does not change and is stable from update to update; that is where the best ROI will come. With a constantly changing interface, the only limited benefit comes from being able to execute a script in a way that saves the tester time and effort.
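
One common way to soften that maintenance cost is to keep locators and inputs as data rather than code, so that a UI change means editing a table instead of rescripting. A hedged sketch, again with invented element IDs:

```python
from selenium.webdriver.common.by import By

# Locators and inputs kept as data; in practice this table often lives
# in a spreadsheet or CSV that can be edited without touching the code.
LOGIN_STEPS = [
    {"action": "type",  "locator": "username", "value": "testuser"},
    {"action": "type",  "locator": "password", "value": "secret"},
    {"action": "click", "locator": "submit",   "value": None},
]

def run_steps(driver, steps):
    """A small interpreter; only this function is real code, so a UI
    change usually means editing the table, not rescripting."""
    for step in steps:
        element = driver.find_element(By.ID, step["locator"])
        if step["action"] == "type":
            element.send_keys(step["value"])
        elif step["action"] == "click":
            element.click()
```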

What does the Failure of Healthcare.gov mean for present Software Development Methods?

The most important thing to recognize is that this was primarily a failure of leadership, not of the technicians at work on the deployment. Whoever the primary stakeholders were, they made a miserable decision to go ahead with the failed October 1 deployment. It was ultimately up to them to ascertain whether the deployment would be sufficient, and they missed it. Clearly they failed terribly at arriving at a reasonably reliable assessment, and they paid severely in the resulting outcome. This is well documented, and many on both the Republican and Democratic sides have blasted the decision; there is not a lot new to be said about it. Sadly for them, the vitriolic political conversation of the past few years, which has done little if anything beneficial for government, made their project open season to be sniped at. However, there are some potentially useful things those of us in software development can take away and hopefully learn from, in order to avoid such a catastrophic outcome in our own projects.

The first thing of importance is to structure your project so that it is set up for success. This means identifying the key players and the exact requirements of the deliverables and the solution to be deployed. It means setting up effective metrics and tools that allow the progress and state of the project to be tracked sufficiently. It means making sure the lines of communication between key players are open, effective, and achieving the results necessary for success. It means key milestones are met in a timely way on the project timeline. It means results are measurable and that features and performance behavior are tracked. It means shipping and deployment dates must slip if the work is not ready, even when the market timing is not in our favor. There are always setbacks and obstacles to overcome, but it seems the project managers of the HealthCare.gov project forgot these things. Much has been made of the lack of transparency and open process among key teams of the development effort.

Instead of heeding the warning signs when October 1st came, the hapless Healthcare.gov project managers deployed anyway and were unprepared for the inevitable rage and frustration from nearly everybody concerned. It may be that they had no choice and were told by the White House that they must deploy or else. Who knows; but they deployed despite the lack of positive indicators, or in the face of contraindicators, and things got very uncomfortable very quickly.

What it means for those of us who choose to learn from others’ mistakes is that these things were avoidable. We must set our projects up for success and not be pushed into arbitrary timelines. One key lesson from the Healthcare.gov effort is the critical importance of identifying, and confirming through testing, the existence and actual functionality of the service level agreements among the independent, distributed services that the primary site (in this case healthcare.gov) relies on to generate the data needed to complete a transaction.
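
A rough sketch of that kind of dependency check: before relying on external services in production, probe each endpoint and measure whether it answers within the agreed time. The URLs and the one-second budget below are placeholders:

```python
import time
import urllib.request

# Placeholder endpoints standing in for the distributed services a site
# depends on to complete a transaction.
DEPENDENCIES = {
    "identity service": "https://example.com/health",
    "eligibility service": "https://example.org/health",
}

SLA_SECONDS = 1.0  # invented per-call service-level budget

for name, url in DEPENDENCIES.items():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=SLA_SECONDS) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False  # timeout, connection refused, HTTP error, etc.
    elapsed = time.monotonic() - start
    print(f"{name}: {'OK' if ok else 'FAILED'} in {elapsed:.2f}s (budget {SLA_SECONDS}s)")
```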

What else can be learned? I’m sure many object lessons can be highlighted. I will stick with the obvious key takeaway: for those of us in development, the primary lesson should be that we structure, plan, and execute our projects in such a way as to set them up for success. We need to create the necessary mechanisms for tracking the project, put them into place, encourage and open up communication channels among the key players, and then get out of the way.

To be fair to those involved with the Healthcare.gov project, the reality is that they operate under many government restrictions we do not see in private industry. Those regulations very likely caused unpredictable delays and contributed significantly to the misjudgment and ultimate failure of their initial launch.