This paper was written and posted in two parts in September 2000 and May 2001 and published on the 2001 Borland Conference CD.

The Case for XP
Extreme Programming (XP) is a self-proclaimed ‘light methodology’: a set of practices for small- to medium-sized development teams to help ensure quality and high business value in the code they produce, while eliminating excessive overhead. This paper analyzes approaches to modern software development, then introduces XP and its practices.
Risk Management
All software development projects have to deal with risk. Most projects are unique, containing at least one, if not several, elements that are new. While companies and the ground they and their development projects cover certainly overlap, each has its own complex peculiarities:
The hard thing, I think, for a lot of us writing business systems ... is that it's very difficult to understand how to structure these business systems well. ... It's one thing to design GUI systems or operating systems or database systems; those are inherently logical things. But the way business is run, like say, a payroll system, is inherently illogical - and that's what makes it so much harder to do that kind of thing. (Fowler “Future of Software Development” 25:30)
Companies are about meeting customers' needs and, usually, turning a profit in the process. Building a logical, consistent infrastructure is often of secondary importance. Bending over backwards for customers (and shareholders) usually does not yield a neat, tidy stack of business rules; the rules change as the opportunities for better service and/or a better profit change. We can't complain too loudly lest we bite the hand that feeds us, but acknowledging the difficulty of corporate development problems is important.
In addition, the people and technology resources used to solve these problems vary greatly from project to project. Finding people who can both understand how the business works and manipulate the technology at hand, which often changes too fast, is difficult and certainly risky. Moving forward with a thick handful of unknown issues requires sound risk management.
Risk Management Strategies
In August 1997, Forbes magazine published an article titled “Resilience vs. Anticipation.” In it, the author quotes UC Berkeley political scientist Aaron Wildavsky's work Searching for Safety, in which Wildavsky identifies two categories of risk management: anticipation and resilience.
Anticipation is a mode of control by a central mind; efforts are made to predict and prevent potential dangers before damage is done. Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back. ... Anticipation seeks to preserve stability: the less fluctuation, the better. Resilience accommodates variability; ... The positive side of anticipation is that it encourages imagination and deep thought. And it is good at eliminating known risks. It can build confidence. But anticipation doesn't work when the world changes rapidly, and in unexpected ways. It encourages two types of error: hubristic central planning and overcaution. (Postrel)
Change in Software Development
Since the presence of change affects the effectiveness of a risk management strategy, how important is dealing with change in software development?
Change is a fact of life with every program. ... even during development of the first version of a program, you'll add unanticipated features that require changes. Error corrections also introduce changes. (McConnell 98)
On a typical sample of roughly one million instructions, a study at IBM found that the average project experiences a 25 percent change in requirements during development. (McConnell 30)
[A]ccept the fact of change as a way of life, rather than an untoward and annoying exception. ... the programmer delivers satisfaction of a user need rather than any tangible product. And both the actual need and the user's perception of that need will change as programs are built, tested, and used. (Brooks 117)
Anticipation Methodologies
An anticipation methodology attempts to identify and solve as many, if not all, problems prior to coding. (Fowler “The New Methodology”) A full write-up of requirements is commissioned, followed by a period of architecture development. The hope is that the requirements and design stages will have uncovered all major problems that could occur during the subsequent coding phase. (McConnell 27).
There are some problems with this approach. The customer probably has limited knowledge of the business as a whole: business processes are executed by numerous people, and knowledge of the entire process usually resides only in the collective minds of the employees. Even if one or a few people have a comprehensive grasp of the business processes, they usually have a limited ability to recall everything known about the business beforehand. “On a typical project ... the customer can't reliably describe what is needed before the code is written. ... The development process helps customers better understand their own needs, and this is a major source of requirements changes.” (McConnell 30).
Likewise, the programmers on a team have either limited knowledge of the technology to be used on the project or, at the least, the same inability to recall every issue that will arise during the course of the project. Not convinced? Consider code optimization. Today's common-sense approach is to not start with optimizations. Attempting to predict where bottlenecks will occur prior to coding has proven fruitless in enough cases that it's usually a waste of time. Profiling finished code is a much more effective way of identifying bottlenecks (McConnell 681).
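A minimal sketch of this profile-late approach, assuming Python's standard cProfile and pstats modules; the process_orders function is purely hypothetical:

```python
# Hypothetical illustration: profile finished code instead of guessing at bottlenecks up front.
import cProfile
import pstats

def process_orders(orders):
    # Stand-in for real application code whose hot spots are unknown in advance.
    return sorted(orders)

cProfile.run("process_orders(list(range(100000, 0, -1)))", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # show the ten most expensive calls
```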
The business itself can also change during the course of development. If a new client can be brought on board, resulting in a 100% increase in revenue at the price of changing some internal processes, more than likely management will choose to pay that price, and the programmers will be the ones to pay it.
Anticipation seems a poor choice in software development, because it depends on an unrealistic amount of foresight and a small amount of change. (There are some development situations where the technology is not new and the customers and programmers both have prior experience with a very similar project; in these cases, an anticipation approach is more feasible.) Adding to the problem is the fact that change within an anticipation methodology can be quite expensive. “Data from TRW shows that a change in the early stages of a project, in requirements or architecture, costs 50 to 200 times less than the same change later, in construction or maintenance. Studies at IBM have shown the same thing.” (McConnell 25).
An anticipation methodology has goals that are both hard to meet and expensive when they are not met.
Handling Changes During Development
If anticipation suffers from putting too much distance between the time a problem is created (e.g., a requirement goes undiscovered) and the time it is found and fixed, what are some approaches to dealing with this?
Steve McConnell lists 5 ways of handling requirements changes during the coding phase of a project. Oddly, 3 of them don't really address handling a change during the coding phase. The first suggestion is to use a ‘pre-flight’ checklist of common requirement categories to ensure the requirements are good. The second suggestion is to dissuade the customer from making changes by rationally explaining how expensive a change can be. The last of the 5 options is to “dump the project”! (McConnell 31).
The two items that actually involve changes during coding ironically reflect a resilience risk management approach rather than anticipation: he recommends setting up change-control procedures as well as adopting a prototyping approach (McConnell 32). In fact, support for iterative development techniques abounds:
Iterative Development
Evolutionary delivery is an approach that delivers your system in stages. You can build a little, get a little feedback from your users, adjust your design a little, make a few changes, and build a little more. The key is using short development cycles so that you can respond to your users quickly. (McConnell 32).
One always has, at every stage in the [iterative] process, a working system. I find that teams can grow much more complex entities in four months than they can build.
... one of the most promising of the current technological efforts, and one which attacks the essence, ... is the development of approaches and tools for rapid prototyping of systems as part of the iterative specification of requirements. (Brooks 199-201)
The most important ... part [of handling change] is to know accurately where we are. We need an honest feedback mechanism which can accurately tell us what the situation is at frequent intervals. The key to this feedback is iterative development. (Fowler “The New Methodology”).
McConnell also has a section in his book Code Complete covering Evolutionary Delivery (664), which further illustrates iterative development.
XP is Resilient
It's unfortunate that no one has ever written a book about incremental approaches to software development because it would be a potent collection of techniques. (McConnell 654)
Kent Beck, one of the founders of XP, authored an introductory work titled Extreme Programming Explained: Embrace Change, which attempts to do what McConnell wished for.
XP has four values that serve as a foundation for the XP practices.
Communication
XP encourages extreme communication. If the customer needs to see the program in action to fully formulate requirements, put the customer on the development team and churn out working versions every 2 to 4 weeks, developing requirements along the way. If the programmers need clarification on requirements, the customer is a member of the team -- lean back in your chair and ask the customer.
Programmers work in pairs: two people to every one machine. Pair members rotate regularly. All code is owned by the team, not by individuals. This promotes communication of technical knowledge throughout the team. When technical challenges arise, the team is more able to address the problem.
Pair programming also excels at matching up programmers of differing abilities. Less experienced members are constantly mentored, and the risk of lower-quality code from less experienced members being added to the application is minimized.
Feedback
The faster change is identified, the faster it can be dealt with. “In general, the principle is to find an error as close as possible to the time at which it was introduced.” (McConnell 25). XP strives for the fastest feedback possible in every aspect of the project.
Unit Tests are written for nearly every piece of production code. Unit tests must run at 100% before any code is checked in. When production code is changed, the tests quickly identify any side-effect problems. After new code is checked in, it's immediately integrated with the latest changes from the rest of the team, and again all unit tests must be made to run at 100%.
Acceptance Tests are written with the customer to verify the application is doing what the customer needs it to do.
Pair Programming provides constant code reviews. No more dreary code review meetings -- put two sets of eyes on the code as it's written. Collective Ownership of the code by all members of the team helps ensure even more eyes will see the code, increasing the amount of code review performed.
Simplicity
The customer is the ultimate driving force in XP. Do what the customer needs, as simply as possible, and nothing else. Take on a mindset to reduce complexity from the beginning.
All code should be refactored as often as possible. Refactoring is the process of improving code's structure without changing its functionality. Refactoring produces highly decoupled objects, which are easy to test, easy to use, more flexible, and therefore more changeable.
Courage
No more fragile code. With smooth communication, quick feedback, and simple code, programmers have the support they need to dive into changes when they come ... and they will come.
XP Practices
XP has 12 core practices that implement the 4 core values. Beck readily acknowledges that “...none of the ideas in XP are new. Most are as old as programming. There is a sense in which XP is conservative -- all its techniques have been proven...” (Beck xviii).
What's new in XP is the emphasis on practicing all of these ideas continuously:
- “If code reviews are good, we'll review code all the time (pair programming)
- If testing is good, everybody will test all the time (unit testing), even the customers (functional [acceptance] testing)
- If design is good, we'll make it part of everybody's daily business (refactoring)
- If simplicity is good, we'll always leave the system with the simplest design that supports its current functionality (the simplest thing that could possibly work)
- If architecture is important, everybody will work defining and refining the architecture all the time (metaphor)
- If integration testing is important, then we'll integrate and test several times a day (continuous integration)
- If short iterations are good, we'll make the iterations really, really short ... (the Planning Game).” (Beck xv).
On-site Customer
Every XP project has one or more individuals who fulfill the customer role on the team. The customer's job is to write and prioritize stories (tasks, from a user's perspective, that the software must perform), assist with acceptance testing, and be on hand to answer questions from the development team as they arise. It's typical for the customer to continue with the regular duties of their job, but it's important that they be physically located with the development team.
The project starts and ends with the customer. The customer determines what must go into the product and declares the work successful through approved acceptance tests which were authored by the customer.
Having the customer be an active member of the team provides for frequent and cheap communication. There's a much smaller need for formal documentation with direct communication. If the customer is physically separate from the team, intermediate forms of communication (conference calls, email, documents) become more valuable because it's harder for the developers to get the information they need from the customer directly. Maintenance of this information can become a project in and of itself and distract the team from working on the actual product.
An involved customer can see the product as it evolves and can redirect development efforts as requirements and the fulfillment of those requirements come into focus. Changes are identified quickly and in smaller, more manageable portions with fewer side effects.
Metaphor
The project metaphor is, more or less, an informal architecture of the system. The metaphor describes the system in simple concepts. The concepts can be literal or figurative, depending on the clarity of the actual system.
Perhaps the system resembles a post office, an assembly line, or an anthill. The point is to pick something familiar enough that every member of the team can understand it. Since an XP project has little to no formal architecture documentation, the metaphor can be a useful tool to aid communication among team members, especially between programmers and the customer. A good metaphor can sometimes inspire improvements to the application itself. But it's important to remember the metaphor is simply a tool of communication and should be changed as the needs of the project change.
Small Releases
Small releases are a key part of generating feedback and making a project resilient.
An XP project is a series of iterations, each lasting 2 to 4 weeks. Each iteration starts with the Planning Game, an activity that determines the tasks for the current iteration, and ends with a ‘finished’ product: all tests pass and the product is as functional as possible.
Each release may or may not be a production release, especially in the early days of a project. Most projects simply aren't useful until a minimum feature set is complete. But it's important to approach each release aiming to give the customer the most business value possible. When technical items are carried over from iteration to iteration, they can accumulate additional functionality that may not be needed.
One project I worked on spent a good bit of time constructing 26 elaborate reports for the system. Once the application got to a beta stage, it was put into a production environment in three locations, and all three quickly made good use of the product. Four months into the beta, a problem with the reporting module was detected during development. It was a major bug -- none of the reports could be opened. It wasn't long before the development team wondered why none of the three beta sites had come across this problem. It turned out that none of the beta sites had even tried to use the reports. If they didn't need the reports, was it really necessary to build them?
In another situation, a new data entry form was being added to an existing application. The GUI for the form had been designed to be as easy to use as possible. Implementing the design required a third-party GUI tool that was complex and time-consuming to work with. Many additional weeks were added to the timeline tweaking the GUI. After it was complete, the new form was still useless, however, because the data entered through it had not been incorporated into the other processes in the application that needed it. Following the practice of small releases that are as close to functional as possible would have delayed the more elaborate GUI features in order to focus on using the new data elsewhere in the app. If an early iteration worked well enough for the customer, the more elaborate GUI might have been easily scaled down to a less efficient, but still quite usable, form, allowing the customer to move on to more important items.
As noted earlier, customers often need to see a working product to help crystallize requirements. Sometimes verbal agreements on design decisions don't stand up in the face of actual implementation. It's important to keep a working product in front of the team to generate precise feedback.
Small releases also help the technical members of the team give accurate estimates. Any time estimate beyond 2 to 4 weeks tends to be very imprecise.
Planning Game
Each iteration begins with the Planning Game, an informal process that sets the agenda for the iteration. The game starts with the customer defining requirements, or ‘user stories’. Technical members work with the customer to normalize these stories into manageable chunks and break them down into specific tasks, and to introduce technical tasks needed to support the customer's requests (e.g., upgrading development software, automating builds). The developers then place ideal-time estimates on the stories and tasks. Based on those estimates, combined with the team's velocity (the average amount of work the team has completed in past iterations), the customer prioritizes the user stories. Programmers then sign up for tasks and the development portion of the iteration begins.
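As an illustration of how velocity can bound an iteration's scope, here is a minimal sketch; the story names, point values, and three-iteration average are hypothetical, not part of XP's definition:

```python
# Hypothetical sketch: use past velocity to decide how many prioritized stories fit.
completed_points_per_iteration = [21, 18, 24]   # ideal-time points finished in past iterations
velocity = sum(completed_points_per_iteration) / len(completed_points_per_iteration)

prioritized_stories = [                          # (story, estimate in ideal points), customer-ordered
    ("Enter new customer order", 8),
    ("Print daily order summary", 5),
    ("Export orders to accounting", 8),
    ("Archive closed orders", 5),
]

planned, remaining = [], velocity
for story, estimate in prioritized_stories:
    if estimate <= remaining:                    # take stories in priority order until velocity is used up
        planned.append(story)
        remaining -= estimate

print("Velocity: %.1f points" % velocity)
print("Planned for this iteration: %s" % ", ".join(planned))
```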
These first four practices allow for cheap, easy, and frequent feedback to add resilience to the project. The faster requirements are discovered and changes are identified, the faster the project can proceed in the right direction.
Pair Programming
All programming on an XP team is done in pairs, two people at one machine. Each task from the Planning Game is owned by an individual. When the day starts, pairs form up, each person either pairing to help someone else, or requesting help on their own tasks. Pairs stay together until a logical break comes up. One takes a turn ‘driving’ while the other actively participates verbally. As ideas flow between the two, the keyboard can be swapped off as often as necessary to get the best code on the screen. Pair assignments are fluid and change throughout the course of a day.
The benefits of pairing are numerous. One thing pairing provides is constant code review -- no line of code is written without two sets of eyes. This can reduce or eliminate the need for code review meetings, which can often be boring and wasteful.
Pairing helps distribute knowledge of the code more evenly throughout the team. This can eliminate personnel bottlenecks due to illness, vacations or team changes. If only one team member understands how a crucial layer of the system functions, progress can be severely derailed if that individual leaves for another job.
Pairing balances talent within the team. A project that assigns portions to specific individuals can suffer from varying quality: the functions written by senior members work well but are held back by the bugs and poor design of the functions written by junior members. With pairing, no junior-level member has to write code alone. They are constantly mentored, which educates them on the job and builds up the quality of the code throughout.
Individual programming moves a project towards the average abilities of the team. Pair programming moves a project towards the maximum abilities of the team.
Pairing can also be faster than individual programming. When writing code, one rarely works at a single mental level, and switching levels takes a bit of time. Having two brains working on a problem allows thought to occur at more than one level simultaneously; two people can leapfrog each other. While one writes code, the other can think at a higher level: code organization, where to go next, tests that still need to be written.
A pair of programmers also has reduced distractions. There's built-in accountability to ensure the mind and mouse don't stray too far from the task at hand. Writer's block (as well as Debugger's block) is less apt to steal time away from two people.
Pairing is probably one of the most controversial XP practices because of a number of challenges facing it. Management may be a hard sell because they see pairing as a waste of resources. Many of the benefits of pairing are long term and hard to see up front. Other benefits, like increased development speed, may have to be experienced to be believed because it's easy to assume development is a simple labor task when in fact there's a high degree of creativity and thought that must go into it. The math that says, “Two can work twice as fast as one” simply doesn't hold up for many development tasks.
While management may struggle with pairing, many programmers take issue with the practice as well. Successful programmers depend on good tools and sound structure. Introducing pairing, which requires sharing of tools and practices, can create friction as differences rub up against one another. Everything from tool sets to work environment can be up for debate. In addition, many programmers believe that they do their best work alone, or don't enjoy the mentoring aspect of pairing with a less experienced person.
Collective Ownership
Collective ownership refers to the code. Many development efforts assign specific portions of the application to individuals. Sometimes programmers and managers like this because it can be easier to measure individual contributions to the project. However, the potential for bottlenecks with individual ownership is higher than with collective ownership.
When only specific individuals are familiar with certain portions of the codebase, others on the team can be held up waiting for changes to be made. In addition, the amount of work needed for different sections of a project can change during the course of development. If individual ownership of each section is promoted too much, workloads can become lopsided.
Collective ownership allows anyone on the team, at any time, to work with any piece of code. If a pair working with object A needs object B to change, that pair can immediately make the change to object B to accommodate the needs of object A.
The practice of quickly dipping into a related piece of code to make a quick change can be dangerous, because quick changes can often create side-effect bugs. In an XP project, however, every piece of code is developed test-first, ensuring each piece of functionality is unit tested. If a quick edit fixes one thing but breaks several others, the test suite is run shortly after the edit is made and reports the errors immediately to the programmer who made it. If a bug slips through at this level, acceptance testing provides a second net for catching the problem.
Pairing will be more difficult if the code any pair is working on is ‘owned’ by only one member. The non-owner of the pair may not feel comfortable suggesting or making changes to code that they don't have rights to.
Pair programming and collective ownership do not snuff out individual contributions (good or bad), but they do make them harder to single out. Bonus plans and other types of performance rewards may have to be changed.
XP aims to avoid the hang-ups caused by individual ownership by focusing the team's ownership on the whole. Take pride in the whole, not in individual contributions to it. The project must succeed as a whole to be effective, and that should be the goal of any development effort.
One concern about collective ownership is ensuring each programmer is a positive contributor to the team and not riding on others’ efforts. An XP team is very people-centric, relying on a lot of interaction (especially with pairing) and verbal communication. It would be difficult for a non-contributing programmer to hide out for long in this environment. In fact, it would be easier to hide out in an environment where individuals hole up in their cubes for weeks at a time while they code their own thing.
Another concern with collective ownership is building expertise with the application. There are limits to the familiarity an individual can have with the codebase. Individual ownership addresses this by limiting the amount of code each programmer works with, promoting expertise in certain areas. Documentation of the code helps cover the gaps as well. However, documentation must be maintained, and while individual ownership can help with code expertise, the fact is it's still easy to out-code the limits of a programmer's memory.
Collective ownership doesn't help in this regard, because it's concerned with overcoming individual-ownership bottlenecks by spreading knowledge of the whole codebase to every member.
XP answers this problem of code familiarity by moving the responsibility from the developers to the testing structure (unit tests and acceptance tests). Instead of relying on the programmers or documentation to remember every aspect of the code, XP makes the code responsible for maintaining its own accuracy through unit tests.
Testing
The idea is that the developers are responsible for proving to their customers that the code works correctly; it's not the customer's job to prove that the code is broken.
-- Kent Beck
Testing is a crucial practice on an XP project. XP succeeds by making a project resilient. Resilience means accurate and frequent feedback. Testing provides this.
In XP, there are two categories of tests: unit tests and acceptance tests.
A unit test is a piece of code that exercises one aspect of a piece of production code. Unit tests should be small and fast. For example, given an object that does monetary conversions, a unit test would go through the following steps (sketched in code after the list):
1) Create an instance of the conversion object
2) Pass in an amount of a known currency
3) Request a conversion to another currency
4) Compare the output from the object with a hardcoded expected value
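A minimal sketch of such a unit test, assuming Python's unittest module and a hypothetical CurrencyConverter class with a fixed USD-to-EUR rate (the class name and the rate are illustrative, not part of the paper):

```python
import unittest

class CurrencyConverter:
    """Minimal hypothetical production class so the sketch runs on its own."""
    RATES = {("USD", "EUR"): 0.9}   # fixed rate purely for illustration

    def convert(self, amount, source, target):
        return amount * self.RATES[(source, target)]

class CurrencyConverterTest(unittest.TestCase):
    def test_usd_to_eur(self):
        converter = CurrencyConverter()                 # 1) create an instance of the conversion object
        result = converter.convert(100, "USD", "EUR")   # 2) pass in a known amount, 3) request a conversion
        self.assertAlmostEqual(result, 90.0)            # 4) compare with a hardcoded expected value

if __name__ == "__main__":
    unittest.main()
```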
This test is now a sentry assigned to monitor this piece of functionality. At the time it's written, it may seem too small a thing to worry about. However, the amount of code in even a small project can grow very quickly. Without an automatic way to check every important detail in the code, soon the code will not be exercised regularly. Growing a collection of unit tests from day one becomes an extremely powerful tool later on in the project.
All unit tests must pass 100% before a programmer can check in any new code. After check-in, integration is done on a separate machine with all of the latest changes, and again all unit tests must pass. Any problems should be fixed immediately.
To help ensure that a suitable suite of unit tests is built throughout development, a coding practice known as “test-first” programming is used. Before adding a new piece of functionality, a unit test exercising the not-yet-written code is created. Then, just enough production code is written to allow the test to compile, but not pass. The unit test is executed to make sure it really does fail; if it passes at this point, there's probably a bug in the unit test. Then production code is written until the test passes. This style helps ensure a consistent test suite is developed alongside the production code.
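The same hypothetical converter, sketched as a test-first cycle -- the test exists first, a stub lets it run but fail, and only then is enough production code written to make it pass:

```python
import unittest

# Step 1: the test is written before the production code exists.
class CurrencyConverterTest(unittest.TestCase):
    def test_usd_to_eur(self):
        self.assertAlmostEqual(CurrencyConverter().convert(100, "USD", "EUR"), 90.0)

# Step 2: a stub that lets the test run but deliberately fail.
# class CurrencyConverter:
#     def convert(self, amount, source, target):
#         return None   # the test must fail here, or the test itself is suspect

# Step 3: just enough production code to make the test pass.
class CurrencyConverter:
    RATES = {("USD", "EUR"): 0.9}   # hypothetical fixed rate, as in the earlier sketch

    def convert(self, amount, source, target):
        return amount * self.RATES[(source, target)]

if __name__ == "__main__":
    unittest.main()
```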
Having a consistent test suite eliminates fragile code. In the later stages of a project, programmers are usually hesitant to revisit previously written code because it's either buggy and/or messy or because it's solid -- which pretty much runs the gamut. The reason is the same regardless of the quality of the code -- fear of making a mistake and introducing new problems or breaking things that have been working.
The unit tests remove this fear. Programmers can make bold changes and see the cost of the change immediately by re-running the test suite.
Unit testing also promotes good design. It's easy to create highly coupled code without unit tests, because for every production object there is often only one other production object using it, at least when the code is first written. Unit tests ensure that everything tested has two clients from inception -- the production code that uses it and the unit test that tests it. Requiring code to be responsive to multiple clients forces less coupling between objects, which promotes long-term flexibility. This flexibility is a crucial contributor to resilience.
As an added bonus, unit tests make for great usage documentation. Separate example code does not need to be written. If unit tests are written thoroughly, example code exists for free. This can greatly reduce if not eliminate the amount of internal documentation required.
Acceptance tests are distinguished from unit tests in a couple of ways. First, they should test the system end-to-end. Second, the customer is involved in creating the acceptance tests. Ideally, a framework should be built that allows the customer the full ability to add new tests for the system without any technical intervention. For example, an XML file could store input/output data for each test, and a framework coded to load each case from the XML file and execute it. Unit tests ensure each technical detail is working properly. Acceptance tests ensure each customer requirement is working properly.
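A minimal sketch of such a customer-editable acceptance test runner, assuming a hypothetical cases.xml file and reusing the hypothetical CurrencyConverter from the unit test sketches:

```python
# Hypothetical sketch: acceptance cases the customer can edit without touching code.
# cases.xml might look like:
#   <cases>
#     <case amount="100" source="USD" target="EUR" expected="90.0"/>
#     <case amount="250" source="USD" target="EUR" expected="225.0"/>
#   </cases>
import xml.etree.ElementTree as ET

class CurrencyConverter:
    RATES = {("USD", "EUR"): 0.9}   # same illustrative rate as in the unit test sketch

    def convert(self, amount, source, target):
        return amount * self.RATES[(source, target)]

def run_acceptance_cases(path):
    converter = CurrencyConverter()
    failures = 0
    for case in ET.parse(path).getroot().findall("case"):
        expected = float(case.get("expected"))
        actual = converter.convert(float(case.get("amount")), case.get("source"), case.get("target"))
        if abs(actual - expected) > 0.01:
            failures += 1
            print("FAIL: amount=%s %s->%s expected=%s got=%s" % (
                case.get("amount"), case.get("source"), case.get("target"), expected, actual))
    print("%d case(s) failed" % failures)

if __name__ == "__main__":
    run_acceptance_cases("cases.xml")
```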
The testing practices are a main support for eliminating up-front design. The design doesn't have to be right the first time because it's cheaper to change on the fly -- it's resilient. Refactoring also plays an important role in creating resilience.
Refactoring
“How do we actually go beyond just keeping [a project] going [to] ... improving the quality of the design as we go forward. I think it's that area that's always been the problem. Maintenance has always been the thing that no one talks about. [Refactoring] is one technique along this direction.” (Fowler “Future of Software Development” 35:30)
“Hiding the areas in which you anticipate changes is one of the most powerful techniques for minimizing the impact of changes.” (McConnell 98)
The topic of refactoring is a large and important one in and of itself. Within an XP project, it is a crucial contributor of resilience.
Refactoring is the process of improving the design of code without changing the functionality. The code should be clean and readable. Any duplication should be consolidated.
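As a brief, hypothetical illustration of consolidating duplication without changing behavior (the discount rule and function names are invented for this sketch):

```python
# Before: the same discount rule is duplicated in two functions.
def invoice_total(prices):
    return sum(p * 0.9 if p > 100 else p for p in prices)

def quote_total(prices):
    return sum(p * 0.9 if p > 100 else p for p in prices)

# After: the rule lives in one place; both callers behave exactly as before.
def discounted(price):
    return price * 0.9 if price > 100 else price

def invoice_total_refactored(prices):
    return sum(discounted(p) for p in prices)

def quote_total_refactored(prices):
    return sum(discounted(p) for p in prices)

# The existing unit tests guard the change; a spot check:
assert invoice_total([50, 200]) == invoice_total_refactored([50, 200])
```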
Refactorings should be done on an ongoing basis throughout development. As soon as structural improvements make themselves known, they should be made. In many environments this would be risky, as the tendency for developers to introduce side-effect bugs while making changes is high. But again, the unit tests provide the quick feedback required to steer a programmer back on track during a refactoring.
Refactorings also tend to get postponed because no new functionality is being created. However, allowing poorly structured code to exist in a project is a risk that accumulates over weeks of development. Refactoring is best practiced on a regular basis.
Simple Design
To help ensure frequent feedback, it's important that the application's design be kept simple and focused on providing business value. While there will always be tasks that are primarily technical and necessary to support providing business value, it's important these tasks be kept as simple as possible.
The main reason for this is to eliminate unnecessary development time. It's easy for the technical members of a team to overdo aspects of the project because “we might need it.” Here we come to a couple of XP's home-grown acronyms: YAGNI and DTSTTCPW.
YAGNI stands for “You Aren't Going To Need It.” Too often, the team attempts to build in functionality it might need in the future. This is a carryover from the anticipation style of doing things: if future changes are expensive, make sure you do it right, and think of everything, the first time. YAGNI takes advantage of the resilience created by testing and refactoring. Since that resilience allows for inexpensive future changes, we only need to build for today.
DTSTTCPW stands for “Do The Simplest Thing That Could Possibly Work.” One note here: simple should not imply poor quality, or even merely the quickest thing to write, though development speed is the main motivation for simple design. Simple should include as much elegance as possible without becoming burdensome. But the tendency is still toward simplicity over elegance. Here's why.
Thorough refactoring practices will cover any initial problems. The fact is, it's hard to get it right the first time, so simple design says: don't try to. Write the unit tests and get them passing in a simple way. If the code just written turns out not to be important to the overall project, then it's okay if it's less than ideal, as long as it works (passes testing). If the code is crucial, it's guaranteed to be revisited many more times. The process of refactoring allows the less-than-ideal but simple design to have its kinks worked out, and no one has wasted a lot of time trying to get the design right the first time.
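As a hypothetical illustration of DTSTTCPW, reusing the converter example: if the only story so far needs a single conversion, the simplest passing code can ignore generality until another test demands it.

```python
# Simplest thing that could possibly work for the only story so far (USD -> EUR).
class CurrencyConverter:
    def convert(self, amount, source, target):
        return amount * 0.9   # hypothetical fixed rate; generalize only when a new test requires it

# Today's only requirement, expressed as a quick check:
assert abs(CurrencyConverter().convert(100, "USD", "EUR") - 90.0) < 0.01
```

When a second currency pair shows up as a story, the unit tests written then will drive the generalization, and refactoring keeps the change cheap.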
Continuous Integration
Mixing the latest code from each programmer together can be a difficult process, especially if this task is not done often. To stay resilient, after writing new code that passes all tests locally, programmers must then integrate their changes with the latest code base and ensure all the tests still pass. If not, fixes must be made right away until all tests again pass.
It's recommended this task be done many times a day, usually on a dedicated integration machine.
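A minimal sketch of an integration run on a dedicated machine, assuming a CVS repository (typical of the period) and a unittest-based suite under a tests/ directory; the commands and paths are illustrative assumptions:

```python
# Hypothetical sketch: pull the latest team changes, run every unit test,
# and refuse the build unless all of them pass.
import subprocess
import sys
import unittest

def integrate():
    subprocess.check_call(["cvs", "update", "-d"])        # bring in the latest checked-in code
    suite = unittest.defaultTestLoader.discover("tests")  # load all unit tests under tests/
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    if not result.wasSuccessful():
        sys.exit("Integration failed: fix the tests before checking in more code.")
    print("All unit tests pass; integration accepted.")

if __name__ == "__main__":
    integrate()
```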
Coding Standard
Having a coding standard is a commonly accepted practice in most projects regardless of methodology. It is equally important within an XP team, especially in light of the Collective Ownership and Refactoring practices.
40-hour Week
XP promotes a well-rested team. Its founders do not believe in the sweatshop mentality. Tired workers make mistakes and start looking for a new job.
Bibliography
Beck, Kent. Extreme Programming Explained: Embrace Change. Addison-Wesley, 2000. ISBN 0-201-61641-6.
Brooks, Frederick P., Jr. The Mythical Man-Month, Anniversary Edition. Addison-Wesley, 1995. ISBN 0-201-83595-9.
Fowler, Martin. “The New Methodology.” 12 Sept 2000. http://www.martinfowler.com/articles/newMethodology.html
---. “The Future of Software Development.” Video stream from Dr. Dobb's TechNetCast of the Software Development 2000 West panel discussion. http://www.technetcast.com/tnc_play_stream.html?stream_id=227
McConnell, Steven C. Code Complete. Microsoft Press, 1993. ISBN 1-55615-484-4.
Postrel, Virginia I. “Resilience vs. Anticipation.” Forbes ASAP, 25 Aug 1997. 11 Sept 2000. http://www.forbes.com/asap/97/0825/056.htm
(cc) Chris Morris, 2012.
This paper was originally published at clabs.org.