Once upon a time, in the bad old days, projects were spec'd out well in advance. Reams of developer documentation, design documents and specifications were written by "software architects". The whole thing was signed off on by management, and months into the project, the actual software development would begin.
The architects, having conceived of separate parts of the system in their separate ivory towers, and having cashed their cheques and left the building, would have passed on their great wisdom to the programmers, whose job it would then be to translate their documents and drafts into working software.
And naturally, any issues found when attempting to implement the solution would require rounds of meetings to sort out. Architects would have to be rehired, or worse still, ignored. Software would just happen.
As for software testing, it would be the bailiwick of very harried folks, getting the software mere days before their (oft-delayed) release date and subsequently being micromanaged to the hilt to make SURE no bugs crept out with the software.
Agile changed all that.
Agile's PRINCIPLES are outlined in a manifesto document that's already been noded. But principles are one thing; what are the common practices that make this model work?
Product organisation
Work on a product is typically organised into short iterations called sprints, each comprising two to three weeks' worth of work with defined goals and measurable results. At the end of every sprint, there should be a tangible, demonstrable product to show to the customer (either the explicit customer, or the customer's advocate). This results in constant, measurable progress, but it also means that the customer is constantly involved in shaping the product, from start to finish.
It also means that the customer can see the product at every stage of its development, play with the results, and approve or disapprove of the direction it is taking.
Work is divided into sprints ahead of time, and stories (a story is a description of some service or process the software is to perform) are defined. At the beginning of every sprint, candidate stories are chosen to make up the goal of the sprint, and the development team estimates the size and level of effort of each story (measured in "points", an arbitrary measure of size). The stories are then broken up into tasks, and the number of hours for each is estimated. After a while the team gets good, through experience, at figuring out how many "points" can be achieved in a sprint, and more adept at estimating task sizes in hours.
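To make that concrete (the numbers and class name here are invented for illustration, not taken from any particular tool): a team's "velocity" is just the average number of points completed over recent sprints, and it's the simplest guide to how much work to take on next. A back-of-the-envelope sketch in Java:

    import java.util.List;

    // A toy velocity calculator -- purely illustrative, not a real planning tool.
    public class Velocity {
        // Average points completed over recent sprints.
        public static double average(List<Integer> completedPoints) {
            return completedPoints.stream()
                    .mapToInt(Integer::intValue)
                    .average()
                    .orElse(0.0);
        }

        public static void main(String[] args) {
            // A team that finished 18, 22 and 20 points in its last three
            // sprints can reasonably plan on about 20 for the next one.
            System.out.println(average(List.of(18, 22, 20))); // prints 20.0
        }
    }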
During the sprint, developers take tasks, assign themselves to them, and mark them off as they're completed. This can be done either in software (like XPlanner) or manually, by sticking cards on a wall and moving them from one spot on the wall to another. Either way, anyone walking by can see how much has been accomplished at any given time, and isolate problems, pull in more work, or have a meeting to see what's holding up existing work.
At the end of each sprint, the progress of the team is demonstrated to stakeholders, and then the team usually meets to talk about what went well and what went poorly.
Project implementation
Two VERY complementary practices that most Agile projects make heavy use of are Test Driven Development and Continuous Integration.
Test Driven Development insists that software tests are written before the software itself is written. In other words, before you write a function to add two numbers, you need to write software (usually referred to as unit tests) that will exercise it, using as many representative tests as are required to describe correct behaviour. In this case, you'd want to test adding zero to a positive number, a positive number to a positive number, a negative number to a positive number, etc. Software is considered complete when the tests pass. This is in marked contrast to earlier ideas of over-architecting software: the principle of YAGNI (you ain't gonna need it) says make it work, THEN make it work better.

Software products that support Test Driven Development include JUnit (for the Java programming language) and its siblings CppUnit (for C++), NUnit (for .NET development) and so on. These products not only provide a framework for writing these tests, they also provide a way for them to be run automatically, and the results can drive any number of other processes.
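As a minimal sketch of what this looks like in JUnit 4 (the Calculator class is hypothetical, invented for this example), the tests below are written first, watched to fail, and then just enough code is written to make them pass:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CalculatorTest {
        @Test
        public void addingZeroChangesNothing() {
            assertEquals(7, Calculator.add(7, 0));
        }

        @Test
        public void addsTwoPositiveNumbers() {
            assertEquals(5, Calculator.add(2, 3));
        }

        @Test
        public void addsANegativeToAPositiveNumber() {
            assertEquals(-1, Calculator.add(2, -3));
        }
    }

    // In the YAGNI spirit, the implementation is only as clever as the tests
    // demand (package-private so it can share a file with the test class):
    class Calculator {
        static int add(int a, int b) {
            return a + b;
        }
    }

Delete the implementation and all three tests fail, which is exactly the point: the tests define what "working" means before any production code exists.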
To test code that depends on software which either does not yet exist or would be difficult to drive automatically, products like EasyMock let you create "mock" versions of software entities that behave like their real-life counterparts.
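A sketch of how that looks with EasyMock (the RateService and Billing types are invented for this example; only the EasyMock calls themselves are real API): the mock stands in for a collaborator that hasn't been written yet.

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class BillingTest {
        // A collaborator that does not exist yet -- hypothetical.
        interface RateService {
            double rateFor(String customer);
        }

        // The class under test depends on the unwritten service.
        static class Billing {
            private final RateService rates;
            Billing(RateService rates) { this.rates = rates; }
            double charge(String customer, double hours) {
                return rates.rateFor(customer) * hours;
            }
        }

        @Test
        public void chargesAtTheMockedRate() {
            RateService mock = createMock(RateService.class);
            expect(mock.rateFor("acme")).andReturn(50.0); // record expectations
            replay(mock);                                 // switch to replay mode
            assertEquals(100.0, new Billing(mock).charge("acme", 2), 0.001);
            verify(mock);                                 // all expectations met?
        }
    }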
The second is Continuous Integration, which means that a build system builds the software on a regular basis and runs whatever unit tests exist. It is easier to fix a bug at the moment it's introduced into the system (because, for example, someone checked in code in one place that turns out to affect the system elsewhere), while the changes are fresh in the minds of the development team, than later when it is discovered in testing. The two practices go hand in hand: no software exists without unit tests to exercise it, and every time changes are made, the software as a whole is automatically built and put through its paces by a machine, with any compilation problems or unit test failures reported immediately. Cruise Control is the most common continuous integration product.
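For a flavour of the setup, here is a minimal sketch of a CruiseControl config.xml; the project name, paths and intervals are invented for illustration:

    <cruisecontrol>
      <project name="myproject">
        <!-- Watch version control for new check-ins. -->
        <modificationset quietperiod="60">
          <cvs localworkingcopy="checkout/myproject"/>
        </modificationset>
        <!-- Every 300 seconds, if anything changed, run the Ant build,
             which compiles the code and runs the unit tests. -->
        <schedule interval="300">
          <ant buildfile="checkout/myproject/build.xml"/>
        </schedule>
      </project>
    </cruisecontrol>

Any compilation problem or failing test then shows up minutes after the offending check-in, rather than weeks later in testing.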
As a result, a team can react quickly to any inadvertent bugs in the system as well as any changes required by the customers or stakeholders. Because the customer (or his representative) is always present, software is not created in a vacuum. The higher-level leads and the grunt developers are always cheek by jowl, so there's a constant flow of communication between all concerned. Because details are built into the system in two-week iterations, the customer is near-constantly given feedback on how the system is being built. Machines, having been set up right, constantly verify and document all built code. As the sprints follow one after another, the project takes shape according to the wishes of all concerned, and to the best of anyone's ability, its quality is constantly being evaluated and preserved. Technically, one could dip into the build system at any point and retrieve stable, working, demonstrable software.
This represents a quantum leap in software development methodology.