And here is why:
It is against the spirit of web standards
The whole reason that web standards exist is so that we don't have to write specific code for specific environments. We should write code that adheres to established standards and software in charge of displaying our code should display it as the standards dictate.
It relies on the browser user-agent string
... which has a hilariously disastrous history and is easily spoofable.
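A tiny sketch of why sniffing the user-agent string is so fragile. For historical compatibility reasons, Chrome's UA string contains both "Mozilla" and "Safari", so a naive substring check happily misidentifies it (browser names and version format here are real-world examples, not anything special to this article):

```javascript
// Chrome's actual user-agent string mentions three other browsers.
const chromeUA =
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';

function naiveIsSafari(ua) {
  // Looks reasonable, but also matches Chrome, Edge, and others.
  return ua.includes('Safari');
}
```

And that's before anyone deliberately spoofs the string, which every major browser lets you do.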
It can hinder devices
Example: you detect for the iPhone and serve it special content. Now the iPhone can never see the web page as other browsers see it, despite being fully capable of doing so.
So why do we do it?
We do it because different browsers handle things differently, and browser detection can get us out of a pinch and get things working how they should. You can hardly blame us, right?
Often the situations that drive us to resort to browser detection are rage-inducing. But remember, it's often not the browser that is at fault. Even in the case of IE 6, it was the most standards-compliant and advanced browser of its time when it was released. And some of the standards that we have today were not complete at that time.
What should we do instead?
I'm the first to admit that real-world web design sometimes needs quick fixes, budget-acceptable solutions, and making damn sure features work as intended. This doesn't always allow for altruistic choices that leave out some functionality because it is the "right thing to do."
... we would do capability testing. That's the information we really need, right? Test whether the environment we are in is capable of what we want to do. If it is, do it. Easier said than done, I'm sure, and I'd hardly know where to begin myself. But I'm sure some of y'all are very smart folks and can get it done (or are already doing it!)
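The idea above can be sketched in a few lines. This is a minimal, hypothetical helper (the name `supportsFeature` is my own, not an established API): instead of asking "which browser is this?", you ask "does this environment have the thing I need?"

```javascript
// Capability testing: check for the feature itself, not the browser name.
// The feature either exists in this environment or it doesn't, regardless
// of what the user-agent string claims.
function supportsFeature(obj, prop) {
  return typeof obj === 'object' && obj !== null && prop in obj;
}

// Typical browser usage, guarded so the sketch also runs outside a browser:
if (typeof document !== 'undefined' &&
    supportsFeature(document, 'querySelector')) {
  // Safe to use document.querySelector here.
}
```

The payoff is exactly the iPhone scenario from earlier: a new browser that supports the feature passes the test automatically, with no code changes, instead of being locked out by a name check.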